Abstract
Color pencil drawing is well loved for its rich expressiveness. This paper proposes an approach for generating feature-preserving color pencil drawings from photographs. To mimic the tonal style of color pencil drawings, which are much lighter and less saturated than photographs, we devise a lightness enhancement mapping and a saturation reduction mapping. The lightness mapping is a function with a monotonically decreasing derivative, so it not only increases lightness but also preserves the features of the input photograph. Because color saturation is usually correlated with lightness, we suppress saturation in a lightness-dependent manner to yield a harmonious tone. Finally, two extremum operators are provided to generate a foreground-aware outline map in which the colors of the generated contours are consistent with those of the foreground object. Comprehensive experiments show that color pencil drawings generated by our method surpass those of existing methods in tone capture and feature preservation.
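The tonal adjustment described above can be illustrated with a minimal sketch. This is not the paper's actual mappings: it assumes a simple power-law lightness curve (whose derivative is monotonically decreasing, as the abstract requires) and a linear lightness-dependent saturation factor, applied to a rough HSL-style lightness; both choices are illustrative stand-ins.

```python
import numpy as np

def tonal_adjust(rgb, gamma=0.6, k=0.5):
    """Illustrative tonal mapping: lighten with a concave curve and
    reduce saturation more strongly where lightness is high.

    rgb: float array in [0, 1], shape (..., 3).
    gamma, k: illustrative parameters, not from the paper.
    """
    # Simple HSL-style lightness from channel extrema.
    cmax = rgb.max(axis=-1)
    cmin = rgb.min(axis=-1)
    lightness = (cmax + cmin) / 2.0

    # Concave power curve: L' = L**gamma with gamma < 1 lightens the
    # image, and its derivative gamma * L**(gamma - 1) decreases in L,
    # so dark features keep more contrast than highlights.
    new_l = lightness ** gamma

    # Lightness-dependent saturation suppression: brighter regions
    # lose more saturation, giving a softer, more harmonious tone.
    sat_scale = 1.0 - k * new_l

    # Recenter the per-channel deviation around the new lightness and
    # scale it down.
    chroma = rgb - lightness[..., None]
    return np.clip(new_l[..., None] + sat_scale[..., None] * chroma,
                   0.0, 1.0)
```

With the default parameters, a mid-tone pixel such as `[0.2, 0.5, 0.8]` becomes both lighter on average and less saturated (a smaller spread between its channels), which is the qualitative behavior the abstract attributes to the two mappings.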
Acknowledgements
We thank the reviewers for their valuable comments and constructive suggestions. This work was supported in part by the GD Natural Science Foundation (2021A1515012301, 2022A1515011425) and the Key Research and Development Project of Guangzhou (202206010091, SL2022B03J01235).
Author information
Declaration of competing interest
The authors have no competing interests to declare that are relevant to the content of this article.
Dong Wang is an associate professor in the Department of Computer Science and Engineering at South China Agricultural University, Guangzhou, China. She received her Ph.D. degree from the Department of Computer Science at City University of Hong Kong in 2012. From 2016 to 2017, she worked at Simon Fraser University. Her research interests include computer vision, image computation, and computer graphics.
Guiqing Li is a professor in the School of Computer Science and Engineering, South China University of Technology (SCUT). Before joining SCUT, he worked as a postdoctoral researcher at the State Key Laboratory of Computer Aided Design and Computer Graphics, Zhejiang University. From 2002 to 2008, he visited City University of Hong Kong several times as a research associate and research fellow. His research interests include dynamic geometry processing, image and video editing, and digital geometry processing.
Chengying Gao is an associate professor in the School of Computer Science and Engineering, Sun Yat-sen University. She received her Ph.D. degree in computer science from the School of Information Science and Technology, Sun Yat-sen University in 2003. Her research interests include computer graphics and image processing.
Shengwu Fu is a master's student in the Department of Computer Science and Engineering at South China Agricultural University. He majored in computer science and technology and received his undergraduate degree from South China Agricultural University in 2021. His research interests include computer vision and machine learning.
Yun Liang is a professor in the Department of Computer Science and Engineering at South China Agricultural University. She received her M.Sc. and Ph.D. degrees from the School of Information Science and Technology, Sun Yat-sen University in 2005 and 2011, respectively. From 2016 to 2017, she worked at Simon Fraser University. Her research interests include computer vision, image computation, and machine learning.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095.
To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
About this article
Cite this article
Wang, D., Li, G., Gao, C. et al. Feature-preserving color pencil drawings from photographs. Comp. Visual Media 9, 807–825 (2023). https://doi.org/10.1007/s41095-022-0320-6