

Depth map super-resolution using stereo-vision-assisted model

Yang et al., 2015

Document ID: 1745036875158037497
Authors: Yang Y, Gao M, Zhang J, Zha Z, Wang Z
Publication year: 2015
Publication venue: Neurocomputing

Snippet

In this paper, we propose a novel Stereo-Vision-Assisted (SVA) model for depth map super-resolution. Given a low-resolution depth map as input, we recover a high-resolution depth map using the registered high-resolution color stereo image pair. First, based on the mutual …
Continue reading at www.sciencedirect.com
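
The snippet describes the general setup: a low-resolution depth map is upsampled using a registered high-resolution color image (here, a stereo pair) as guidance. The paper's SVA model itself is not reproduced here; purely as a hedged illustration of that input/output relationship, the sketch below implements a generic joint-bilateral-upsampling baseline guided by a single color view. All names and parameters (joint_bilateral_upsample, sigma_spatial, sigma_range) are illustrative assumptions, not taken from the paper.

    # Rough baseline sketch (NOT the paper's SVA model): joint bilateral
    # upsampling of a low-resolution depth map guided by one registered
    # high-resolution color view. Names and defaults are illustrative.
    import numpy as np


    def joint_bilateral_upsample(depth_lr, color_hr, radius=3,
                                 sigma_spatial=2.0, sigma_range=0.1):
        """Upsample depth_lr (h, w) to the resolution of color_hr (H, W, 3).

        color_hr is assumed registered with the depth map and scaled to [0, 1].
        Weights combine spatial closeness on the low-resolution grid with
        photometric similarity in the high-resolution color image.
        """
        h, w = depth_lr.shape
        H, W, _ = color_hr.shape
        scale_y, scale_x = h / H, w / W
        depth_hr = np.zeros((H, W), dtype=np.float64)

        for y in range(H):
            for x in range(W):
                # Fractional position of this high-resolution pixel on the
                # low-resolution grid.
                cy, cx = y * scale_y, x * scale_x
                y0 = min(int(round(cy)), h - 1)
                x0 = min(int(round(cx)), w - 1)
                num = den = 0.0
                for dy in range(-radius, radius + 1):
                    for dx in range(-radius, radius + 1):
                        ny, nx = y0 + dy, x0 + dx
                        if not (0 <= ny < h and 0 <= nx < w):
                            continue
                        # High-resolution pixel this low-resolution
                        # neighbour projects to.
                        gy = min(int(round(ny / scale_y)), H - 1)
                        gx = min(int(round(nx / scale_x)), W - 1)
                        w_s = np.exp(-((ny - cy) ** 2 + (nx - cx) ** 2)
                                     / (2 * sigma_spatial ** 2))
                        diff = color_hr[y, x] - color_hr[gy, gx]
                        w_r = np.exp(-np.dot(diff, diff)
                                     / (2 * sigma_range ** 2))
                        num += w_s * w_r * depth_lr[ny, nx]
                        den += w_s * w_r
                depth_hr[y, x] = num / den if den > 0 else depth_lr[y0, x0]
        return depth_hr


    if __name__ == "__main__":
        # Toy example: 4x upsampling with a random depth map and guide image.
        rng = np.random.default_rng(0)
        out = joint_bilateral_upsample(rng.random((16, 16)),
                                       rng.random((64, 64, 3)))
        print(out.shape)  # (64, 64)

Unlike this single-view baseline, the SVA model as described in the snippet additionally exploits the registered color stereo pair.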

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10024 - Color image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06K - RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K9/36 - Image preprocessing, i.e. processing the image information without deciding about the identity of the image
    • G06K9/46 - Extraction of features or characteristics of the image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06F - ELECTRICAL DIGITAL DATA PROCESSING
    • G06F17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/30 - Information retrieval; Database structures therefor; File system structures therefor
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06K - RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K9/62 - Methods or arrangements for recognition using electronic means
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06F - ELECTRICAL DIGITAL DATA PROCESSING
    • G06F17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 - Complex mathematical operations
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration, e.g. from bit-mapped to bit-mapped creating a similar image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformation in the plane of the image, e.g. from bit-mapped to bit-mapped creating a different image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 - 2D [Two Dimensional] image generation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06N - COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS

Similar Documents

Publication / Title
Long et al. SparseNeuS: Fast generalizable neural surface reconstruction from sparse views
US11210803B2 (en) Method for 3D scene dense reconstruction based on monocular visual slam
Zou et al. Df-net: Unsupervised joint learning of depth and flow using cross-task consistency
Yang et al. Spatial-depth super resolution for range images
Yang et al. Depth map super-resolution using stereo-vision-assisted model
Sheng et al. Cross-view recurrence-based self-supervised super-resolution of light field
Lu et al. Global-local fusion network for face super-resolution
Ye et al. DRM-SLAM: Towards dense reconstruction of monocular SLAM with scene depth fusion
Atapour-Abarghouei et al. Generative adversarial framework for depth filling via wasserstein metric, cosine transform and domain transfer
Xiang et al. Deep optical flow supervised learning with prior assumptions
Duan et al. RGB-Fusion: Monocular 3D reconstruction with learned depth prediction
Bazrafkan et al. Semiparallel deep neural network hybrid architecture: first application on depth from monocular camera
Liu et al. Video frame interpolation via optical flow estimation with image inpainting
Liu et al. A survey on deep learning methods for scene flow estimation
Zhou et al. A superior image inpainting scheme using Transformer-based self-supervised attention GAN model
Ren et al. Surface normal and Gaussian weight constraints for indoor depth structure completion
Li et al. Depth estimation based on monocular camera sensors in autonomous vehicles: A self-supervised learning approach
Wang et al. Splatflow: Learning multi-frame optical flow via splatting
Qi et al. Sparse prior guided deep multi-view stereo
Barranco et al. Joint direct estimation of 3d geometry and 3d motion using spatio temporal gradients
Tang et al. MFAGAN: A multiscale feature-attention generative adversarial network for infrared and visible image fusion
Zhang et al. Unsupervised detail-preserving network for high quality monocular depth estimation
Ou et al. A scene segmentation algorithm combining the body and the edge of the object
Qi et al. Unsupervised multi-view stereo network based on multi-stage depth estimation
Lin et al. A-SATMVSNet: An attention-aware multi-view stereo matching network based on satellite imagery