

ShadowMover: Automatically Projecting Real Shadows onto Virtual Object

Yu et al., 2023

Document ID
15537123907785527961
Authors
Yu P, Guo J, Huang F, Chen Z, Wang C, Zhang Y, Guo Y
Publication year
2023
Publication venue
IEEE Transactions on Visualization and Computer Graphics

Snippet

Inserting 3D virtual objects into real-world images has many applications in photo editing and augmented reality. One key issue to ensure the reality of the composite whole scene is to generate consistent shadows between virtual and real objects. However, it is challenging …
Continue reading at ieeexplore.ieee.org.
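
The snippet above only states the compositing problem (a virtual object inserted into a real photo needs shadows consistent with the real scene). As a point of reference, the minimal sketch below shows the classical planar-projection baseline: casting a virtual object's geometry onto a known ground plane along an estimated directional light. This is not the paper's ShadowMover method, which is about transferring real shadows onto virtual objects; the function name, light direction, and ground plane here are illustrative assumptions.

```python
import numpy as np

def project_shadow_points(points, light_dir, plane_n, plane_d):
    """Project 3D points onto the plane n.x + d = 0 along a directional
    light, giving the footprint a virtual object's hard shadow would cover.

    points    : (N, 3) vertex positions of the virtual object
    light_dir : (3,) direction the light rays travel (e.g. roughly downward)
    plane_n   : (3,) plane normal (e.g. [0, 1, 0] for a ground plane)
    plane_d   : scalar offset in n.x + d = 0
    """
    points = np.asarray(points, dtype=float)
    light_dir = np.asarray(light_dir, dtype=float)
    plane_n = np.asarray(plane_n, dtype=float)

    denom = plane_n @ light_dir
    if abs(denom) < 1e-8:
        raise ValueError("light direction is parallel to the plane")

    # Solve n.(p + t*L) + d = 0 for t, then step each point to the plane.
    t = -(points @ plane_n + plane_d) / denom
    return points + t[:, None] * light_dir[None, :]

# Toy usage: a unit cube hovering above the ground, lit from above-left.
cube = np.array([[x, y + 1.0, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)])
shadow = project_shadow_points(cube, light_dir=[0.4, -1.0, 0.2],
                               plane_n=[0, 1, 0], plane_d=0.0)
print(shadow.round(2))  # all projected points lie on the y = 0 ground plane
```

In a real compositing pipeline the projected footprint would then be rasterized and darkened in the background image; estimating the light direction and ground plane from the photo is the hard part that learning-based methods such as the one cited here address.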

Classifications

All classifications fall under G PHYSICS › G06 COMPUTING; CALCULATING; COUNTING, mostly in subclass G06T (IMAGE DATA PROCESSING OR GENERATION, IN GENERAL), plus one entry each in G06F (ELECTRICAL DIGITAL DATA PROCESSING) and G06K (RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS).

    • G06T15/00 3D [Three Dimensional] image rendering › G06T15/10 Geometric effects › G06T15/20 Perspective computation › G06T15/205 Image-based rendering
    • G06T15/00 3D [Three Dimensional] image rendering › G06T15/50 Lighting effects › G06T15/60 Shadow generation
    • G06T15/00 3D [Three Dimensional] image rendering › G06T15/04 Texture mapping
    • G06T9/00 Image coding, e.g. from bit-mapped to non bit-mapped › G06T9/001 Model-based coding, e.g. wire frame
    • G06T15/00 3D [Three Dimensional] image rendering › G06T15/06 Ray-tracing
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects › G06T17/05 Geographic models
    • G06T13/00 Animation
    • G06T7/00 Image analysis
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T1/00 General purpose image data processing
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T3/00 Geometric image transformation in the plane of the image, e.g. from bit-mapped to bit-mapped creating a different image
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06K9/00 Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

Similar Documents

Publication / Title
Tewari et al. State of the art on neural rendering
Hedman et al. Casual 3D photography
Fridman et al. Scenescape: Text-driven consistent scene generation
Hedman et al. Scalable inside-out image-based rendering
Li et al. Crowdsampling the plenoptic function
US9082224B2 (en) Systems and methods 2-D to 3-D conversion using depth access segments to define an object
US20080228449A1 (en) Systems and methods for 2-d to 3-d conversion using depth access segments to define an object
US20080225045A1 (en) Systems and methods for 2-d to 3-d image conversion using mask to model, or model to mask, conversion
US20080226181A1 (en) Systems and methods for depth peeling using stereoscopic variables during the rendering of 2-d to 3-d images
Bemana et al. Eikonal fields for refractive novel-view synthesis
US20080226160A1 (en) Systems and methods for filling light in frames during 2-d to 3-d image conversion
Zhang et al. Personal photograph enhancement using internet photo collections
Luo et al. A disocclusion inpainting framework for depth-based view synthesis
Griffiths et al. OutCast: Outdoor Single‐image Relighting with Cast Shadows
Wei et al. Object-based illumination estimation with rendering-aware neural networks
Li et al. 3d human avatar digitization from a single image
Liu et al. Static scene illumination estimation from videos with applications
Park et al. Instant panoramic texture mapping with semantic object matching for large-scale urban scene reproduction
Zhu et al. Learning-based inverse rendering of complex indoor scenes with differentiable monte carlo raytracing
Xu et al. Scalable image-based indoor scene rendering with reflections
Monnier et al. Differentiable blocks world: Qualitative 3d decomposition by rendering primitives
Li et al. Neulighting: Neural lighting for free viewpoint outdoor scene relighting with unconstrained photo collections
Wang et al. Complete 3d human reconstruction from a single incomplete image
Chen et al. S-NeRF++: Autonomous Driving Simulation via Neural Reconstruction and Generation
Nicolet et al. Repurposing a relighting network for realistic compositions of captured scenes