A Novel 3D-UNet Deep Learning Framework Based on High-Dimensional Bilateral Grid for Edge Consistent Single Image Depth Estimation

M Sharma, A Sharma, KR Tushar, A Panneer - arXiv preprint arXiv:2105.10129, 2021 - arxiv.org
The task of predicting smooth and edge-consistent depth maps is notoriously difficult for single image depth estimation. This paper proposes a novel bilateral-grid-based 3D convolutional neural network, dubbed 3DBG-UNet, that parameterizes a high-dimensional feature space by encoding compact 3D bilateral grids with UNets and infers the sharp geometric layout of the scene. Further, another novel model, 3DBGES-UNet, is introduced that integrates 3DBG-UNet for inferring an accurate depth map given a single color view. The 3DBGES-UNet concatenates the 3DBG-UNet geometry map with an Inception-network edge accentuation map and a spatial object boundary map obtained by leveraging semantic segmentation, and trains a UNet model with a ResNet backbone on this stack. Both models are designed with particular attention to explicitly account for edges and minute details. Preserving sharp discontinuities at depth edges is critical for many applications, such as realistic integration of virtual objects in AR video or occlusion-aware view synthesis for 3D display applications. The proposed depth prediction network achieves state-of-the-art performance in both qualitative and quantitative evaluations on the challenging NYUv2-Depth dataset. The code and corresponding pre-trained weights will be made publicly available.
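The abstract does not spell out how the compact 3D bilateral grid is built, but the standard construction splats 2D features along a third, intensity-indexed axis so that neighbouring pixels on opposite sides of an image edge land in different grid cells and stay separable. Below is a minimal PyTorch sketch of such a splat under that assumption; the function name splat_bilateral_grid, the hard binning, and the luminance guide are illustrative choices, not the paper's exact encoding.

    import torch
    import torch.nn.functional as F

    def splat_bilateral_grid(feat, guide, grid_depth=8):
        """Splat a 2D feature map into a 3D bilateral grid.

        feat  : (B, C, H, W) feature map from a 2D encoder
        guide : (B, 1, H, W) guidance signal in [0, 1] (e.g. luminance)
        Returns a (B, C, D, H, W) grid whose third axis bins the guide
        intensity, keeping depth discontinuities aligned with image
        edges separable for a downstream 3D UNet.
        """
        B, C, H, W = feat.shape
        # Hard-assign each pixel to an intensity bin in [0, D-1];
        # a soft (linearly interpolated) or learned splat is also common.
        idx = (guide.clamp(0, 1) * (grid_depth - 1)).round().long()      # (B, 1, H, W)
        onehot = F.one_hot(idx.squeeze(1), grid_depth)                   # (B, H, W, D)
        onehot = onehot.permute(0, 3, 1, 2).unsqueeze(1).to(feat.dtype)  # (B, 1, D, H, W)
        return feat.unsqueeze(2) * onehot                                # (B, C, D, H, W)

A 3D UNet can then convolve the resulting (B, C, D, H, W) volume, which is presumably where 3DBG-UNet applies its encoder-decoder.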
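The 3DBGES-UNet fusion described above reduces, at its core, to channel-wise concatenation of three spatially aligned maps followed by a refinement network. The sketch below illustrates that step; FusionHead and its small convolutional stack are hypothetical stand-ins, since the paper trains a full UNet with a ResNet backbone on the concatenated input rather than this head.

    import torch
    import torch.nn as nn

    class FusionHead(nn.Module):
        """Hypothetical stand-in for the 3DBGES-UNet fusion stage:
        concatenate the 3DBG-UNet geometry map, the edge accentuation
        map, and the semantic boundary map, then refine them into one
        depth map. The paper uses a ResNet-backboned UNet here."""

        def __init__(self, hidden=64):
            super().__init__()
            self.refine = nn.Sequential(
                nn.Conv2d(3, hidden, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(hidden, 1, kernel_size=1),
            )

        def forward(self, geometry, edges, boundaries):
            # Each map: (B, 1, H, W); stacking gives (B, 3, H, W).
            x = torch.cat([geometry, edges, boundaries], dim=1)
            return self.refine(x)

    # Usage with dummy single-channel maps at NYUv2 resolution (480x640):
    head = FusionHead()
    g, e, b = (torch.rand(1, 1, 480, 640) for _ in range(3))
    depth = head(g, e, b)   # (1, 1, 480, 640)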