Patch-Sparsity-Based Image Inpainting Through a Facet Deduced Directional Derivative

Published: 01 May 2019

Abstract

This paper presents a patch-sparsity-based image inpainting algorithm built on a facet-deduced directional derivative. The algorithm preserves the continuity of boundaries across the inpainted region and better restores the missing structure of an image. Two improvements are proposed. First, the facet model is introduced to extract directional features of the image, which effectively reduces the influence of noise. The first-order directional derivatives, together with pixel values, are used to measure the difference between patches, yielding a more reliable and accurate matching result. The local patch-consistency constraint on the sparse representation of the target patch is likewise rewritten in terms of the first-order directional derivative, so a more precise sparse linear combination is obtained under constraints on both color and derivative information. Second, in traditional exemplar-based inpainting algorithms, the patch-confidence value drops sharply in the late stage, so the data term or structure sparsity has little influence on the priority function. To address this, the algorithm modifies the priority computation; the resulting filling order is more reasonable because the modified confidence and the structure sparsity are better balanced. Experiments on several types of image damage demonstrate the superiority of the algorithm.
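The patch-matching idea described above can be sketched in a few lines. This is only an illustrative sketch: simple finite differences stand in for the paper's facet-model derivatives, and the function names and the weights `alpha` and `omega` are assumptions of this sketch, not the paper's notation or exact formulation.

```python
import numpy as np

def directional_derivatives(img):
    """First-order derivatives along x and y via central differences.
    A simple stand-in for the facet-model directional derivatives."""
    gy, gx = np.gradient(img.astype(float))  # np.gradient: axis 0 (y) first
    return gx, gy

def patch_distance(p, q, gp, gq, alpha=0.5):
    """Patch dissimilarity mixing pixel values and derivative features.
    alpha (an illustrative weight, not from the paper) balances the
    color term against the derivative term."""
    color = np.mean((p - q) ** 2)
    deriv = np.mean((gp[0] - gq[0]) ** 2 + (gp[1] - gq[1]) ** 2)
    return (1 - alpha) * color + alpha * deriv

def priority(confidence, sparsity, omega=0.7):
    """Illustrative regularized priority: damping the confidence term so it
    cannot collapse to zero late in the filling process, keeping structure
    sparsity influential. omega is a hypothetical smoothing weight; the
    paper's exact modification may differ."""
    return ((1 - omega) * confidence + omega) * sparsity
```

Because the damped confidence never reaches zero, a patch with high structure sparsity keeps a nonzero priority even in the late filling stage, which is the balancing behavior the abstract describes.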



Published In

IEEE Transactions on Circuits and Systems for Video Technology, Volume 29, Issue 5, May 2019, 330 pages.

Publisher: IEEE Press


      Qualifiers

      • Research-article

Cited By
• (2024) "High-Fidelity and Efficient Pluralistic Image Completion With Transformers," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 46, no. 12, pp. 9612–9629, Dec. 2024. doi: 10.1109/TPAMI.2024.3424835
• (2024) "Dynamic Graph Memory Bank for Video Inpainting," IEEE Transactions on Circuits and Systems for Video Technology, vol. 34, no. 11, pp. 10831–10844, Jun. 2024. doi: 10.1109/TCSVT.2024.3411061
• (2024) "Transformer-Based Image Inpainting Detection via Label Decoupling and Constrained Adversarial Training," IEEE Transactions on Circuits and Systems for Video Technology, vol. 34, no. 3, pp. 1857–1872, Mar. 2024. doi: 10.1109/TCSVT.2023.3299278
• (2023) "Image Inpainting via Correlated Multi-Resolution Feature Projection," IEEE Transactions on Visualization and Computer Graphics, vol. 30, no. 9, pp. 5953–5964, Sep. 2023. doi: 10.1109/TVCG.2023.3315061
• (2023) "Coarse-to-Fine Task-Driven Inpainting for Geoscience Images," IEEE Transactions on Circuits and Systems for Video Technology, vol. 33, no. 12, pp. 7170–7182, Dec. 2023. doi: 10.1109/TCSVT.2023.3276719
• (2023) "Divide-and-Conquer Completion Network for Video Inpainting," IEEE Transactions on Circuits and Systems for Video Technology, vol. 33, no. 6, pp. 2753–2766, Jun. 2023. doi: 10.1109/TCSVT.2022.3225911
• (2023) "Structural similarity-based bi-representation through true noise level for noise-robust face super-resolution," Multimedia Tools and Applications, vol. 82, no. 17, pp. 26255–26288, Jan. 2023. doi: 10.1007/s11042-022-14325-6
• (2022) "Generative Memory-Guided Semantic Reasoning Model for Image Inpainting," IEEE Transactions on Circuits and Systems for Video Technology, vol. 32, no. 11, pp. 7432–7447, Nov. 2022. doi: 10.1109/TCSVT.2022.3188169
• (2020) "Face hallucination via multiple feature learning with hierarchical structure," Information Sciences, vol. 512, pp. 416–430, Feb. 2020. doi: 10.1016/j.ins.2019.06.017
• (2020) "A decentralised approach to scene completion using distributed feature hashgram," Multimedia Tools and Applications, vol. 79, no. 15–16, pp. 9799–9817, Apr. 2020. doi: 10.1007/s11042-019-08403-5
