
From Writing to Painting: A Kinect-Based Cross-Modal Chinese Painting Generation System

Published: 03 November 2014
DOI: 10.1145/2647868.2654911

Abstract

As computer and interaction technologies mature, a much broader range of media is now used for input and output, each with its own rich repertoire of techniques, instruments, and cultural heritage. Combining multiple media can produce novel multimedia human-computer interaction approaches that are more efficient and engaging than traditional single-media methods. This paper presents CalliPaint, a cross-modal art generation system that links Chinese ink-brush calligraphy with Chinese landscape painting. We investigate the mapping between the two modalities based on concepts of metaphoric congruence and implement our findings in a prototype system. A multi-step evaluation with real users suggests that CalliPaint provides a realistic and intuitive experience that allows even novice users to create attractive landscape paintings from their writing. A comparison with general-purpose digital painting software suggests that CalliPaint offers users a more enjoyable experience. Finally, exhibiting CalliPaint in an open-access location, where casual users tried it without any training, showed that the system is easy to learn.
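
The mapping itself is developed in the full paper; purely as an illustrative sketch of what a rule-based cross-modal mapping of this kind might look like (and not the authors' actual method), the Python snippet below maps hypothetical stroke features, of the sort that could be estimated from Kinect depth and skeleton tracking, to hypothetical landscape elements. Every feature name, threshold, and element label here is an assumption made for illustration.

from dataclasses import dataclass

# Hypothetical stroke descriptor. In a real Kinect-based system these values
# would be estimated from depth/skeleton tracking of the writing hand rather
# than supplied by hand; they are placeholders for illustration only.
@dataclass
class Stroke:
    length: float     # normalized path length of the stroke (0..1)
    speed: float      # mean drawing speed (0..1)
    curvature: float  # mean curvature of the trajectory (0..1)
    pressure: float   # estimated brush pressure (0..1)

def map_stroke_to_element(s: Stroke) -> str:
    """Toy rule-based mapping from writing gestures to landscape elements.

    The thresholds and element vocabulary are illustrative assumptions, not
    the metaphoric-congruence mapping evaluated in the CalliPaint study.
    """
    if s.curvature > 0.6 and s.speed < 0.4:
        return "winding river"      # slow, curving strokes -> water
    if s.pressure > 0.7 and s.length > 0.5:
        return "mountain ridge"     # heavy, long strokes -> mountains
    if s.speed > 0.7 and s.length < 0.3:
        return "foliage dabs"       # quick, short strokes -> trees
    return "mist wash"              # default background element

if __name__ == "__main__":
    demo_strokes = [
        Stroke(length=0.8, speed=0.3, curvature=0.7, pressure=0.5),
        Stroke(length=0.6, speed=0.5, curvature=0.2, pressure=0.8),
        Stroke(length=0.2, speed=0.9, curvature=0.3, pressure=0.4),
    ]
    for stroke in demo_strokes:
        print(stroke, "->", map_stroke_to_element(stroke))

A fixed rule table like this is only a stand-in for the mapping the paper derives from its investigation of metaphoric congruence between the two modalities.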

    Published In

    MM '14: Proceedings of the 22nd ACM international conference on Multimedia
    November 2014
    1310 pages
    ISBN: 9781450330633
    DOI: 10.1145/2647868

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. creativity
    2. cross-modal interaction
    3. painting
    4. writing

    Qualifiers

    • Research-article

    Conference

    MM '14: 2014 ACM Multimedia Conference
    November 3 - 7, 2014
    Orlando, Florida, USA

    Acceptance Rates

    MM '14 paper acceptance rate: 55 of 286 submissions (19%)
    Overall acceptance rate: 2,145 of 8,556 submissions (25%)

    Cited By

    • (2024) Digital Mustard Garden: Revitalizing Freehand-ink-painting Teaching through Artistic Participation. Proceedings of the 17th International Symposium on Visual Information Communication and Interaction, pp. 1-8. DOI: 10.1145/3678698.3687184. Online publication date: 11-Dec-2024.
    • (2024) Computational Approaches for Traditional Chinese Painting: From the “Six Principles of Painting” Perspective. Journal of Computer Science and Technology, 39(2): 269-285. DOI: 10.1007/s11390-024-3408-x. Online publication date: 1-Mar-2024.
    • (2023) XCreation: A Graph-based Crossmodal Generative Creativity Support Tool. Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, pp. 1-15. DOI: 10.1145/3586183.3606826. Online publication date: 29-Oct-2023.
    • (2023) InfinitePaint: Painting in Virtual Reality with Passive Haptics Using Wet Brushes and a Physical Proxy Canvas. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, pp. 1-13. DOI: 10.1145/3544548.3580927. Online publication date: 19-Apr-2023.
    • (2016) Gesture based human motion and game principles to aid understanding of science and cultural practices. Multimedia Tools and Applications, 75(19): 11699-11722. DOI: 10.1007/s11042-015-2667-5. Online publication date: 1-Oct-2016.
