DOI: 10.1145/3562939.3565674

An AI-empowered Cloud Solution towards End-to-End 2D-to-3D Image Conversion for Autostereoscopic 3D Display

Published: 29 November 2022

Abstract

Autostereoscopic displays allow users to view 3D content on electronic displays without wearing glasses. However, content for glasses-free 3D displays must be in a 3D format from which novel views can be synthesized. Unfortunately, images and videos today are still typically captured in 2D and cannot be used directly on glasses-free 3D displays. In this paper, we introduce an AI-empowered cloud solution for end-to-end 2D-to-3D image conversion for autostereoscopic 3D displays, or “CONVAS (3D)” for short. Taking a single 2D image as input, CONVAS (3D) automatically converts it into an image suitable for a target autostereoscopic 3D display. The system is implemented on a web-based server so that users can submit conversion tasks and retrieve results without geographical constraints.
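The abstract does not detail the conversion pipeline, but systems of this kind typically chain monocular depth estimation, depth-based novel-view synthesis, and display-specific interlacing. A minimal illustrative sketch of the latter two stages, assuming a precomputed normalized depth map (all function names, the naive forward-warping scheme, and the column-cyclic interlacing pattern are this editor's assumptions, not the paper's actual implementation):

```python
import numpy as np

def synthesize_views(image, depth, n_views=8, max_disparity=8):
    """Warp a single RGB image into n_views horizontally shifted views,
    using a per-pixel disparity derived from a depth map in [0, 1].
    Naive forward warp: later pixels overwrite earlier ones, and
    disoccluded holes remain black (no inpainting)."""
    h, w, _ = image.shape
    views = []
    for k in range(n_views):
        # Camera offset in [-1, 1] across the fan of views.
        offset = (k / (n_views - 1)) * 2.0 - 1.0
        shift = np.round(offset * max_disparity * depth).astype(int)
        view = np.zeros_like(image)
        cols = np.clip(np.arange(w)[None, :] + shift, 0, w - 1)
        rows = np.arange(h)[:, None]
        view[rows, cols] = image  # scatter each source pixel to its new column
        views.append(view)
    return views

def interlace(views):
    """Assign views to pixel columns cyclically -- a simplistic stand-in
    for a real lenticular/parallax-barrier interlacing pattern, which
    depends on the target display's optics and calibration."""
    n = len(views)
    h, w, c = views[0].shape
    out = np.zeros((h, w, c), dtype=views[0].dtype)
    for x in range(w):
        out[:, x] = views[x % n][:, x]
    return out
```

In a real system the depth map would come from a learned monocular depth estimator and the interlacing map from per-display calibration; the sketch only shows how the pieces compose.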



Published In

VRST '22: Proceedings of the 28th ACM Symposium on Virtual Reality Software and Technology
November 2022, 466 pages
ISBN: 9781450398893
DOI: 10.1145/3562939

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery, New York, NY, United States


Qualifiers

• Abstract
• Research
• Refereed limited

Conference

VRST '22
Overall Acceptance Rate: 66 of 254 submissions, 26%

