- research-article, April 2023
“Where is history”: Toward Designing a Voice Assistant to Help Older Adults Locate Interface Features Quickly
CHI '23: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, Article No.: 849, Pages 1–19, https://doi.org/10.1145/3544548.3581447
Older adults often struggle to locate a function quickly in feature-rich user interfaces (UIs). Mobile UIs not only pack many features into a small screen but also get frequent updates to their visual layouts, thereby exacerbating the problem. This ...
- research-article, April 2023
From User Perceptions to Technical Improvement: Enabling People Who Stutter to Better Use Speech Recognition
- Colin Lea,
- Zifang Huang,
- Jaya Narain,
- Lauren Tooley,
- Dianna Yee,
- Dung Tien Tran,
- Panayiotis Georgiou,
- Jeffrey P. Bigham,
- Leah Findlater
CHI '23: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, Article No.: 361, Pages 1–16, https://doi.org/10.1145/3544548.3581224
Consumer speech recognition systems do not work as well for many people with speech differences, such as stuttering, relative to the rest of the general population. However, what is not clear is the degree to which these systems do not work, how they ...
- research-article, November 2022
NoteWordy: Investigating Touch and Speech Input on Smartphones for Personal Data Capture
Proceedings of the ACM on Human-Computer Interaction (PACMHCI), Volume 6, Issue ISS, Article No.: 581, Pages 568–591, https://doi.org/10.1145/3567734
Speech as a natural and low-burden input modality has great potential to support personal data capture. However, little is known about how people use speech input, together with traditional touch input, to capture different types of data in self-tracking ...
- poster, December 2021
eyemR-Talk: Using Speech to Visualise Shared MR Gaze Cues
SA '21 Posters: SIGGRAPH Asia 2021 Posters, Article No.: 16, Pages 1–2, https://doi.org/10.1145/3476124.3488618
In this poster we present eyemR-Talk, a Mixed Reality (MR) collaboration system that uses speech input to trigger shared gaze visualisations between remote users. The system uses 360° panoramic video to support collaboration between a local user in the ...
- extended-abstract, June 2021
Designing Multimodal Self-Tracking Technologies to Promote Data Capture and Self-Reflection
DIS '21 Companion: Companion Publication of the 2021 ACM Designing Interactive Systems Conference, Pages 11–15, https://doi.org/10.1145/3468002.3468232
Self-tracking is a powerful means to help individuals monitor and improve their behaviors. While numerous tracking technologies are available, it has been challenging to lower the tracking burden whilst promoting reflection. This is because low-burden ...
- research-article, June 2021
FoodScrap: Promoting Rich Data Capture and Reflective Food Journaling Through Speech Input
DIS '21: Proceedings of the 2021 ACM Designing Interactive Systems Conference, Pages 606–618, https://doi.org/10.1145/3461778.3462074
The factors influencing people’s food decisions, such as one’s mood and eating environment, are important information for fostering self-reflection and developing a personalized healthy diet. However, it is difficult to consistently collect them due to the heavy ...
- research-article, May 2021
Spoken Conversational Context Improves Query Auto-completion in Web Search
ACM Transactions on Information Systems (TOIS), Volume 39, Issue 3, Article No.: 31, Pages 1–32, https://doi.org/10.1145/3447875
Web searches often originate from conversations in which people engage before they perform a search. Therefore, conversations can be a valuable source of context with which to support the search process. We investigate whether spoken input from ...
- demonstration, October 2020
Supporting Older Adults in Locating Mobile Interface Features with Voice Input
ASSETS '20: Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility, Article No.: 97, Pages 1–4, https://doi.org/10.1145/3373625.3418044
As mobile applications continue to offer more features, tackling the complexity of mobile interfaces can become challenging for older adults. Owing to a small screen and frequent updates that modify the visual layouts of menus and buttons, older adults ...
- research-article, April 2020
Non-Verbal Auditory Input for Controlling Binary, Discrete, and Continuous Input in Automotive User Interfaces
CHI '20: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Pages 1–13, https://doi.org/10.1145/3313831.3376816
Auditory input is becoming increasingly popular for making distraction-free inputs while driving. However, we argue that auditory input is more than just speech. Thus, in this work, we explore using Non-Verbal Auditory Input (...
- research-article, August 2018
A Smart Fridge for Efficient Foodstuff Management with Weight Sensor and Voice Interface
ICPP Workshops '18: Workshop Proceedings of the 47th International Conference on Parallel Processing, Article No.: 2, Pages 1–7, https://doi.org/10.1145/3229710.3229727
Emerging smart appliances provide us with the opportunity to understand people’s in-depth daily life behavior patterns by analyzing their operation states. Smart fridges, for example, can help us estimate the timing and content of meals of ...
- research-article, October 2014
Evaluating multimodal interaction with gestures and speech for point and select tasks
NordiCHI '14: Proceedings of the 8th Nordic Conference on Human-Computer Interaction: Fun, Fast, Foundational, Pages 1027–1030, https://doi.org/10.1145/2639189.2670267
Natural interactions such as speech and gestures have achieved mainstream success independently, with consumer products such as Leap Motion popularizing gestures, while mobile phones have embraced speech input. In this paper we designed an interaction ...
- poster, December 2013
SpeeG2: a speech- and gesture-based interface for efficient controller-free text input
ICMI '13: Proceedings of the 15th ACM International Conference on Multimodal Interaction, Pages 213–220, https://doi.org/10.1145/2522848.2522861
With the emergence of smart TVs, set-top boxes and public information screens over the last few years, there is an increasing demand to no longer use these appliances only for passive output. These devices can also be used to do text-based web search as ...
- poster, February 2013
A voice-based input system for embedded applications with constrained operating conditions
CSCW '13: Proceedings of the 2013 Conference on Computer Supported Cooperative Work Companion, Pages 111–114, https://doi.org/10.1145/2441955.2441985
This paper presents a voice-based input mechanism for embedded applications that have limited computing power and require only a small set of inputs, but where the user is unable to use her or his hands. The requirement of designing for ...
- research-article, April 2011
MozArt: an immersive multimodal CAD system for 3D modeling
IndiaHCI '11: Proceedings of the 3rd Indian Conference on Human-Computer Interaction, Pages 97–100, https://doi.org/10.1145/2407796.2407812
3D modeling has been revolutionized in recent years by the advent of computers. While computers have become much more affordable and accessible to the masses, computer modeling remains a complex task involving a steep learning curve and extensive ...
- short-paper, September 2010
A comparison of speech and GUI input for navigation in complex visualizations on mobile devices
MobileHCI '10: Proceedings of the 12th International Conference on Human Computer Interaction with Mobile Devices and Services, Pages 357–360, https://doi.org/10.1145/1851600.1851665
Mobile devices are ubiquitously used to access web applications. Multimodal mobile interfaces can offer advantages over less flexible approaches, in both usability and range of features. In this study we consider applying speech input to a web-based ...
- demonstration, February 2009
Parakeet: a demonstration of speech recognition on a mobile touch-screen device
IUI '09: Proceedings of the 14th International Conference on Intelligent User Interfaces, Pages 483–484, https://doi.org/10.1145/1502650.1502726
We demonstrate Parakeet, a continuous speech recognition system for mobile touch-screen devices. Parakeet's interface is designed to make correcting errors easy on a handheld device while on the move. Users correct errors using a touch-screen to ...
- research-article, February 2009
Parakeet: a continuous speech recognition system for mobile touch-screen devices
IUI '09: Proceedings of the 14th International Conference on Intelligent User Interfaces, Pages 237–246, https://doi.org/10.1145/1502650.1502685
We present Parakeet, a system for continuous speech recognition on mobile touch-screen devices. The design of Parakeet was guided by computational experiments and validated by a user study. Participants had an average text entry rate of 18 words-per-...
- research-article, September 2008
Evaluating the appropriateness of speech input in marine applications: a field evaluation
MobileHCI '08: Proceedings of the 10th International Conference on Human Computer Interaction with Mobile Devices and Services, Pages 343–346, https://doi.org/10.1145/1409240.1409284
This paper discusses the first of three studies which collectively represent a convergence of two ongoing research agendas: (1) the empirically-based comparison of the effects of evaluation environment on mobile usability evaluation results; and (2) the ...
- short-paper, November 2007
A model for multimodal representation and processing for reference resolution
WMISI '07: Proceedings of the 2007 Workshop on Multimodal Interfaces in Semantic Interaction, Pages 39–42, https://doi.org/10.1145/1330572.1330578
We present a model for dealing with the designation activities of a user in multimodal systems. This model associates a well-defined language with each modality (NL, gesture, visual) as well as a mediator language. It takes into account several semantic features of ...