DOI: 10.1145/3581641.3584077
Research Article | Open Access

FlexType: Flexible Text Input with a Small Set of Input Gestures

Published: 27 March 2023

Abstract

In many situations, it may be impractical or impossible to enter text by selecting precise locations on a physical or touchscreen keyboard. We present an ambiguous keyboard with four character groups that has potential applications for eyes-free text entry, as well as text entry using a single switch or a brain-computer interface. We develop a procedure for optimizing these character groupings based on a disambiguation algorithm that leverages a long-span language model. We produce both alphabetically-constrained and unconstrained character groups in an offline optimization experiment and compare them in a longitudinal user study. Our results did not show a significant difference between the constrained and unconstrained character groups after four hours of practice. As expected, participants had significantly more errors with the unconstrained groups in the first session, suggesting a higher barrier to learning the technique. We therefore recommend the alphabetically-constrained character groups, where participants were able to achieve an average entry rate of 12.0 words per minute with a 2.03% character error rate using a single hand and with no visual feedback.
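
The abstract's core mechanism, entering each letter with one of only four gestures and letting a language model resolve the resulting ambiguity, can be sketched in a few lines of Python. This is a minimal illustration under assumed inputs: the four-group alphabetical split, the toy vocabulary, and the unigram frequencies below are hypothetical stand-ins for the paper's optimized groups and long-span language model.

from collections import defaultdict

# Hypothetical alphabetically-constrained split of a-z into four groups
# (not the groups reported in the paper).
GROUPS = ["abcdef", "ghijklm", "nopqrs", "tuvwxyz"]
CHAR_TO_GROUP = {c: i for i, g in enumerate(GROUPS) for c in g}

def code(word):
    # Map a word to its sequence of group indices, e.g. "the" -> (3, 1, 0).
    return tuple(CHAR_TO_GROUP[c] for c in word)

# Toy vocabulary with made-up relative frequencies standing in for a
# language model score.
VOCAB = {"the": 0.05, "tie": 0.001, "via": 0.0005, "hello": 0.002}

# Index words by their ambiguous code so each gesture sequence maps to
# its set of candidate words.
CODE_TO_WORDS = defaultdict(list)
for w, p in VOCAB.items():
    CODE_TO_WORDS[code(w)].append((p, w))

def disambiguate(group_sequence):
    # Return candidate words for a typed group sequence, most likely first.
    return [w for p, w in sorted(CODE_TO_WORDS[tuple(group_sequence)], reverse=True)]

# Typing t-h-e with only group gestures yields the sequence (3, 1, 0);
# "the", "tie", and "via" all collide, and the model ranks "the" first.
print(disambiguate(code("the")))  # ['the', 'tie', 'via']

In the real system the ranking would come from a long-span language model conditioned on the text entered so far rather than from isolated word frequencies, which is what keeps a four-group keyboard usable despite the heavy ambiguity.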


Cited By

  • Improving FlexType: Ambiguous Text Input for Users with Visual Impairments. Proceedings of the 17th International Conference on PErvasive Technologies Related to Assistive Environments (2024), 130-139. https://doi.org/10.1145/3652037.3652059
  • Unblind Text Inputs: Predicting Hint-text of Text Input in Mobile Apps via LLM. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (2024), 1-20. https://doi.org/10.1145/3613904.3642939
  • CRTypist: Simulating Touchscreen Typing Behavior via Computational Rationality. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (2024), 1-17. https://doi.org/10.1145/3613904.3642918


Published In

IUI '23: Proceedings of the 28th International Conference on Intelligent User Interfaces
March 2023
972 pages
ISBN: 9798400701061
DOI: 10.1145/3581641
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 27 March 2023

Author Tags

  1. accessibility
  2. human-computer interaction
  3. mobile keyboard
  4. text entry

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

IUI '23

Acceptance Rates

Overall acceptance rate: 746 of 2,811 submissions (27%)

