Deep Exemplar-Based Color Transfer for 3D Model

Published: 01 August 2022

Abstract

Recoloring 3D models is a challenging task that often requires professional knowledge and tedious manual effort. In this article, we present the first deep-learning framework for exemplar-based 3D model recoloring, which automatically transfers the colors of a reference image to a 3D model's texture. Our framework consists of two modules that address the two major challenges in 3D color transfer. First, we propose a new feed-forward Color Transfer Network that achieves high-quality semantic-level color transfer by finding dense semantic correspondences between images. Second, accounting for 3D model constraints such as UV mapping, we design a novel 3D Texture Optimization Module that generates a seamless and coherent texture by combining color-transferred results rendered from multiple views. Experiments show that our method performs robustly and generalizes well to various kinds of models.
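The abstract's first module rests on dense semantic correspondence: each source pixel is matched to reference pixels by comparing deep features, and colors are warped across the match. The sketch below illustrates that matching step only, not the paper's actual network; the function name, the cosine-similarity matching, and the softmax temperature `tau` are illustrative assumptions, and the features would in practice come from a pretrained CNN rather than be given directly.

```python
import numpy as np

def transfer_colors(src_feat, ref_feat, ref_colors, tau=0.01):
    """Warp reference colors onto source pixels via dense feature matching.

    src_feat:   (N_src, C) L2-normalized deep features of the rendered view
    ref_feat:   (N_ref, C) L2-normalized deep features of the reference image
    ref_colors: (N_ref, 3) per-pixel colors of the reference image
    tau:        softmax temperature; small tau approaches hard nearest-neighbor
    """
    # Cosine similarity between every source and every reference feature.
    sim = src_feat @ ref_feat.T
    # Numerically stable softmax over reference positions.
    sim = (sim - sim.max(axis=1, keepdims=True)) / tau
    w = np.exp(sim)
    w /= w.sum(axis=1, keepdims=True)
    # Each source pixel gets a similarity-weighted blend of reference colors.
    return w @ ref_colors
```

With a low temperature this reduces to nearest-neighbor color lookup in feature space; a soft blend, as here, keeps the operation differentiable, which is what allows such a matching step to sit inside a feed-forward, end-to-end trained network.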


Cited By

  • "NeRF-Art: Text-Driven Neural Radiance Fields Stylization," IEEE Transactions on Visualization and Computer Graphics, vol. 30, no. 8, pp. 4983–4996, Aug. 2024. doi: 10.1109/TVCG.2023.3283400
  • "Few-shot satellite image classification for bringing deep learning on board OPS-SAT," Expert Systems with Applications, vol. 251, Jul. 2024. doi: 10.1016/j.eswa.2024.123984


Publication Information

Published In: IEEE Transactions on Visualization and Computer Graphics, Volume 28, Issue 8, Aug. 2022 (261 pages)
Publisher: IEEE Educational Activities Department, United States
Article type: Research article