DOI: 10.1145/1378773.1378825

Multimodal Chinese text entry with speech and keypad on mobile devices

Published: 13 January 2008

Abstract

Chinese text entry is challenging on mobile devices that rely on keypad input, because entering a single character may require many key presses. This paper proposes a multimodal text entry technique for Chinese in which users enter text by simultaneously using Jianpin, a simplified phonemic input method, on the keypad and speaking the same text. The core of the technique is a multimodal fusion algorithm that synchronizes the speech and keypad input and fuses the redundant information from the two modalities to select the best candidate. A preliminary evaluation shows that users appreciate the technique and that it can reduce key presses and improve input efficiency.
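The fusion step can be pictured as re-ranking the speech recognizer's candidate list against the keypad evidence. The sketch below is a minimal illustration of that idea under stated assumptions: Jianpin is treated as one key press per syllable giving the initial letter of its pinyin on a T9-style keypad, and the recognizer is assumed to return an n-best list with scores. The function names, data structures, and scores are hypothetical and are not taken from the paper.

```python
# Minimal sketch (assumptions, not the paper's algorithm): re-rank speech
# n-best hypotheses using redundant Jianpin keypad evidence.

# T9-style mapping from keypad digits to the letters each key can produce.
KEYPAD = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}

def matches_keypad(pinyin_initials, key_presses):
    """True if every syllable's pinyin initial is reachable from the
    corresponding key press (one press per syllable, as in Jianpin)."""
    if len(pinyin_initials) != len(key_presses):
        return False
    return all(initial in KEYPAD.get(key, "")
               for initial, key in zip(pinyin_initials, key_presses))

def fuse(speech_candidates, key_presses):
    """Fuse the two modalities by keeping only speech hypotheses that are
    consistent with the key presses and returning the best-scoring one.

    speech_candidates: list of (hanzi, pinyin_initials, score) tuples,
    e.g. ("今天", "jt", 0.62)."""
    consistent = [(hanzi, score)
                  for hanzi, initials, score in speech_candidates
                  if matches_keypad(initials, key_presses)]
    return max(consistent, key=lambda c: c[1])[0] if consistent else None

# Example: the user says "今天" and presses 5 (j/k/l) then 8 (t/u/v) for "j t".
candidates = [("今天", "jt", 0.62), ("经典", "jd", 0.58), ("几天", "jt", 0.35)]
print(fuse(candidates, "58"))  # -> 今天
```

In practice the fusion the abstract describes would also synchronize the two input streams in time and work on richer recognizer output; the point of the sketch is only how the redundant keypad constraint narrows the speech hypotheses.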


Cited By

  • (2012) Hype or Ready for Prime Time? International Journal of Handheld Computing Research, 3(4), 40-55. DOI: 10.4018/jhcr.2012100103. Online publication date: 1-Oct-2012.
  • (2010) Multi-modal text entry and selection on a mobile device. Proceedings of Graphics Interface 2010, 19-26. DOI: 10.5555/1839214.1839219. Online publication date: 31-May-2010.
  • (2010) Natural behavioral patterns of speech recognition error recovery. 2010 Sixth International Conference on Natural Computation, 2135-2138. DOI: 10.1109/ICNC.2010.5582480. Online publication date: Aug-2010.


    Published In

    IUI '08: Proceedings of the 13th international conference on Intelligent user interfaces
    January 2008
    458 pages
    ISBN:9781595939876
    DOI:10.1145/1378773
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 13 January 2008


    Author Tags

    1. Chinese text entry
    2. keypad
    3. mobile devices
    4. multimodal
    5. speech

    Qualifiers

    • Short-paper

    Conference

    IUI08

    Acceptance Rates

    Overall Acceptance Rate 746 of 2,811 submissions, 27%

