
Crownboard: A One-Finger Crown-Based Smartwatch Keyboard for Users with Limited Dexterity

Published: 19 April 2023

Abstract

Mobile text entry is difficult for people with motor impairments due to limited access to smartphones and the need for precise target selection on touchscreens. Text entry on smartwatches, on the other hand, has not been well explored for this population. Crownboard enables people with limited dexterity to enter text on a smartwatch using its crown. It uses an alphabetical layout divided into eight zones around the bezel. The zones are scanned either automatically or manually by rotating the crown, then selected by pressing the crown. Crownboard decodes zone sequences into words and displays word suggestions. We validated its design in multiple studies. First, a comparison between manual and automated scanning revealed that manual scanning is faster and more accurate. Second, a comparison between clockwise and shortest-path scanning identified the former to be faster and more accurate. In the final study with representative users, only 30% of participants could use the default Qwerty. They were 9% and 23% faster with the manual and automated Crownboard, respectively. All participants were able to use both variants of Crownboard.
Figure 1: Crownboard arranges eight zones (keys) in alphabetical order around the edge of a smartwatch. The zones are automatically highlighted one by one in the clockwise direction. When the zone with the desired letter is highlighted, users press the crown to select it, and repeat the process for the other letters of the target word. A statistical decoder displays the most probable word as an auto-complete suggestion directly in the input area in grayed-out text. Users can accept the suggestion by swiping left-to-right anywhere on the screen. The suggestion bar displays the remaining ten most probable words. To select a word from the suggestion bar, users tap anywhere on the screen to start highlighting the predicted words in sequential order. Pressing the crown again enters the highlighted word and exits the suggestion bar. This figure depicts the process of entering the word “priority”.

1 Introduction

Mobile text entry has come a long way since the introduction of the short messaging service (SMS) in the late 1980s [85]. The ubiquity of touchscreens and the development of virtual keyboards coupled with effective predictive systems have made text entry on mobile devices substantially faster and easier. Nowadays, mobile text entry is used not only to keep in touch with friends and family but also to access various mobile services and to get work done. The ability to enter text on the go can thus foster productivity, social and networking abilities, independent living, and economic and social self-sufficiency. However, mobile text entry remains a challenge for people with limited fine motor skills or dexterity. Fine motor skill, or dexterity, is the ability to make movements using the small muscles in the hands and wrists, which involves the coordination of the muscles, the eyes, and the brain [93]. Dexterity can be affected by various motor impairments caused by injuries (e.g., spinal cord injury) and by congenital and age-related conditions like cerebral palsy, muscular dystrophy, multiple sclerosis, spina bifida, ALS or Lou Gehrig’s disease, arthritis, Parkinson’s disease, and essential tremor.
People with limited dexterity face various challenges in entering text with touchscreen-based mobile devices, primarily due to the absence of tactile feedback and physical stability [12, 27, 40, 64, 65, 67]. With physical interfaces, users can anchor their fingers on a physical key, and the need to press down the key for an input reduces the chances of accidental input. The absence of this feedback makes precise target selection difficult. As a result, feature phones1 remain a popular choice amongst the population [63]. However, feature phones are not always easily accessible to the population. Many people with motor impairments find holding the device with one hand or getting the device out of a pocket or purse physically challenging [66]. Some tend to address this issue by wearing a phone lanyard around the neck, but find it uncomfortable or fear standing out from the crowd [15]. Speech is a promising solution [39, 54]; however, research revealed that users are usually reluctant to use the method in public places due to privacy and security concerns [19, 20, 21, 70, 73] and because its accuracy is heavily affected by ambient noise [51, 66]. Speech is also not accessible to those whose motor impairment involves dysarthria or who have a speech impairment. While discussing these issues with people with motor impairments in another study, the participants proposed a timely idea: many of them suggested that a text entry technique for smartwatches that is effective for composing short messages could address many of these challenges. Smartwatches are worn on the wrist, so users have easy access to them. Because the smartwatch is a commonly used device, people with disabilities can blend into the crowd rather than stand out. These findings were further confirmed in the focus group study reported here. This inspired us to design an effective and accessible smartwatch text entry technique for the population.
Designing an accessible text entry technique for smartwatches is arguably more challenging than doing so for smartphones due to the smaller screen space and limited processing power. It is clear that a stable means of interaction is needed rather than gestures, touch, or multi-touch, which can be physically taxing and require high precision. Yet, most existing text entry techniques for smartwatches rely on precise target selection [4, 13, 29, 37, 69, 90] or on performing finger gestures on small interactive spaces [14, 29, 78], which are not compliant with the guidelines for accessible smartwatch interactions [27, 60]. Researchers have recommended exploiting various physical attributes of mobile/wearable devices (e.g., the bezel, physical buttons, etc.) as much as possible to provide access to the population [60, 77].
Crownboard is a novel scanning keyboard that enables text entry on smartwatches by rotating and pressing the crown located on the side of the device. The physical dial provides users with much needed tactile feedback and enables them to anchor and rest their finger on it. The need to press the crown to confirm input reduces the possibility of accidental input. Crownboard is mostly a single-key input method, with the exception of occasional taps and swipes on the display, which can be performed anywhere on the screen and thus do not require precise selection, facilitating accessibility [27, 88]. Finally, the layout is arranged around the bezel (Fig. 1), which does not occupy much of the screen space and reduces interface clutter, which can affect input performance [44, 78]. Fig. 2 illustrates the anatomy of a common round-shaped smartwatch.
The remainder of the paper is organized as follows. First, we review the existing works in the area. We then present the findings of a focus group that motivate and guide the work. Then, we discuss the optimization process of the keyboard design and present the final prototypes. We evaluate the prototypes in multiple user studies and discuss the findings. Finally, we conclude the paper with potential future extensions of the work.
Figure 2: Anatomy of a round-shaped smartwatch.

2 Related Work

Numerous works have studied and proposed approaches for improving access to larger touchscreen-based devices such as smartphones and tablet computers for people with motor impairments [3, 12, 16, 30, 31, 40, 62, 64, 65, 66, 88]. Accessibility of smartwatches, however, has not been well investigated in the literature. Malu et al. [60] evaluated different interaction approaches on smartwatches for people with upper body motor impairments. They found that many participants were unable to complete button, swipe, and tap interactions. In a follow-up study, Malu et al. [61] compared touchscreen input to input on the bezel of a watch; touchscreen input was faster, but the bezel was more accurate.

2.1 Text Entry on Smartwatches

Most keyboards for smartwatches resize the standard Qwerty layout to fit smaller screens. To facilitate the selection of smaller targets, i.e., virtual keys, these methods require users to either zoom in on a specific area, swipe between different areas, or drag the keyboard to focus on a specific area [13, 37, 49, 69, 80], thus requiring multiple actions to enter one letter. These methods occupy much of the screen real estate and are slow by design due to their multi-step disambiguation process. Some miniature Qwerty keyboards do not require multiple actions, relying instead on aggressive auto-correction models to correct any potential errors [29, 90, 95]. This, however, makes the entry of out-of-vocabulary words difficult, often impossible. Alternative approaches reduce the total number of keys in the layout by grouping multiple letters onto one key, then disambiguating the input using sophisticated language and probabilistic models [38, 76, 91]. These methods also occupy much of the screen space and do not always support out-of-vocabulary words. Some keyboards use circular layouts that arrange the letters around the bezel of a smartwatch alphabetically [28, 33, 75], based on the Qwerty layout [14, 78], or through an optimization process [87]. These techniques free up screen real estate but use interaction approaches that are difficult for people with limited motor skills to perform. Particularly, these methods enable text entry by repeatedly tapping on the keys [14, 87], connecting the keys by swiping on the screen [78], performing wrist gestures [28], or rotating the watch’s bezel [96]. Some techniques assign different letters or keys to different fingers, then differentiate between the fingers using external hardware and sensors [32, 38]. These methods are impractical due to the use of external devices. Arif and Mazalek [4] provide a comprehensive review of existing text entry techniques for smartwatches. Almost all of these techniques are designed for people with fine motor skills, thus use interaction approaches that are challenging, if not impossible, for people with limited dexterity to perform [60], such as tapping on tiny keys, performing gestures connecting tiny keys around the screen, performing wrist gestures, tapping with different fingers, and rotating the bezel, which also requires the use of multiple fingers. Besides, none of these methods were evaluated with people with motor impairments.

2.2 Text Entry Techniques for People with Motor Impairments

Siean and Vatavu [82] conducted a literature review of wearable interactions for users with motor impairments, which identified limited research on accessible wearable interactions and low numbers of participants with motor impairments in user studies about wearable interactions. In the area of smartwatch text entry especially, little work has focused on providing access to those with motor impairments. We, therefore, review text entry techniques aimed at other devices, particularly desktop platforms, which in theory could be adapted to smartwatches. These techniques can be categorized into ambiguous and scanning keyboards, discussed below. Table 1 presents the performance of popular text entry techniques for people with motor impairments from the literature. Relevantly, a survey revealed that the median text entry rate for people with physical disabilities is 5.6 words per minute on desktop platforms [47].
Method | Speed | Sample | Device
HandiGlyph [9] | 2.4 wpm | 1 | Pocket PC phone
LURD-Writer [25] | 1–2 wpm | 1 | Computer
3DScan [26] | 1–1.5 wpm | 1 | Computer
CHANTI [84] | 1–4 wpm | 5 | Computer
Humsher [71] | 3–4 wpm | 4 | Computer
Table 1: Entry speed (words per minute) of popular text entry techniques for people with motor impairments.

2.2.1 Ambiguous Keyboards.

These keyboards assign multiple letters per key, thus requiring a user-level or a software-level disambiguation process. These keyboards reduce the total number of keys, hence can display larger keys in a smaller screen space, which facilitates accessibility. However, a manual disambiguation process increases the total number of actions needed to enter a letter, which affects text entry speed. A software-level disambiguation process, on the other hand, reduces the total number of actions but makes out-of-vocabulary word entry difficult. Tanaka-Ishii et al. [86] developed a keyboard where the letters of the English alphabet were grouped into four keys in alphabetical order. It disambiguates the input using a simple language model. Harbusch and Kühn [34] developed a similar keyboard but grouped the letters into three keys based on letter frequencies. Dasher [92] enables text entry using pointing devices. It arranges all letters of the English language alphabetically on one side of the display. To enter text, users move the cursor toward the desired letter; the system then predicts and displays the next most probable letters and words closer to the cursor for easier selection. This process continues until the complete phrase is entered. The method can be used with a range of pointing devices, including a mouse, a touchscreen, or cursor positioning with eye tracking and head movements. However, none of these keyboards has been evaluated with motor impaired people.

2.2.2 Scanning or Single-Switch Keyboards.

These keyboards automatically highlight the keys in a predetermined sequence; when the desired key is highlighted, users select it by performing an action, such as pressing a physical key or performing a gesture, often referred to as a “switch”. Because these techniques enable text entry using a single switch, they are the most common for people with motor impairments [72]. These keyboards can be categorized into two groups based on their scanning mechanisms.
Some keyboards use a uni-level scanning mechanism that requires highlighting only one level of keys. MacKenzie [56] and MacKenzie and Felzer [58], for example, developed the Scanning Ambiguous Keyboard (SAK), a four-key alphabetical layout that groups the English alphabet into three keys and includes a dedicated key for the Space character. The keys are highlighted in sequence. Users select the keys that contain the letters of the target word by pressing any key on the system keyboard, then select the Space key to see the most probable words. They, however, did not evaluate the keyboard with motor impaired people. In a follow-up work, Felzer et al. [23] replaced the switch of SAK (physical keypress) with intentional muscle contractions to further reduce the required physical effort. In an evaluation, one participant with Friedreich’s Ataxia reached an average entry rate of 2.8 wpm with the technique. Ashtiani and MacKenzie [7], in contrast, used eye blinks as the switch. In an evaluation, able-bodied participants reached on average 5.3 wpm with the technique. Belatar and Poirier [9] decomposed the English alphabet into basic shapes, then grouped them into four keys. The keys are highlighted in sequence and selected using a push button. In an evaluation, one participant with Locked-in Syndrome yielded on average a 2.4 wpm entry speed with the technique.
Some keyboards instead use a multi-level scanning mechanism that usually requires scanning two (seldom more) levels of keys. Users first select a group of characters, which starts scanning the characters in the group; users then select the intended character for input. Lin et al. [52, 53], for example, developed a two-level scanning keyboard arranged in a 3 × 3 grid, each cell containing nine letters, symbols, or modifier and navigation keys. With this technique, users first select a cell, then a character or action within it. This technique was not evaluated in a user study. Felzer and Rinderknecht [26] developed a similar 4 × 4 grid layout, where each cell contains another 4 × 4 grid. In a study, one participant with Friedreich’s Ataxia yielded on average a 1.4 wpm entry speed with the technique. Prabhu and Prasad [74] developed a circular keyboard that arranges eight smaller circles, each containing 1–7 letters, symbols, or modifier and navigation keys, around the circumference of a bigger circle. With this technique, users first select a smaller circle, which replaces the contents of the smaller circles with the contents of the selected circle. Users then select the smaller circle containing the target character. This technique was not evaluated with motor impaired people. There are also some dynamic layouts that change letter arrangements based on the previous input. We do not review those here since research has found these to be unusable by people with motor impairments [72]. The technique proposed here uses a hybrid scanning mechanism, where most words are entered using uni-level scanning, but it lets users switch to two-level scanning when the target word is in the suggestion bar.
Participant ID | Gender | Age | Highest Degree | English Proficiency | Level of Impairment | Cause
P1 | Woman | 31 | University | Native | Moderate | Cerebral palsy
P2 | Man | 48 | University | Native | Severe | Delayed development
P3 | Woman | 37 | University | Native | Severe | Multiple sclerosis
Table 2: Demographics of the focus group participants.

3 Focus Group: Smartwatch Text Entry

We conducted a focus group to find out whether there is a need for text entry on smartwatches in the motor impaired community. The focus group discussed the challenges participants face in entering text on mobile devices, the possibility of using a smartwatch as a conduit for entering text, and potential design choices for a smartwatch text entry technique to inform this research, further discussed in Section 3.2.

3.1 Participants

Three motor impaired people participated in the focus group. They were recruited through local accessibility and independent living centers. The centers distributed a call for participation via their emailing lists and shared flyers at social events. Those who were interested in participating in the study contacted us via email or phone. Table 2 presents their demographic information. They all used accessibility tools to operate mobile devices. One of them (P1) owned a smartwatch, which she used mainly to tell time and to receive mobile notifications. They all received U.S. $20 Amazon gift cards for volunteering.
Figure 3:
Figure 3: The four existing smartwatch techniques demonstrated in the focus group: (a) Wear OS Qwerty [10], (b) WatchWriter [29], (c) SwipeRing [78], and (d) COMPASS [96].

3.2 Procedure

Due to the spread of the COVID-19 virus, the focus group was conducted via a teleconference application. We shared the digital informed consent form and the demographic questionnaire with potential volunteers ahead of time for them to learn about the research. Participants completed and signed all forms electronically. Initially, six participants signed the informed consent form, but three were unable to attend the focus group due to technical challenges and health-related issues. In the focus group, participants were addressed by pseudonyms (e.g., Mr. A) to maintain anonymity.
First, we re-explained the study procedure and answered all questions participants had. Then, we started the focus group with general questions about mobile text entry, such as “do you or would you enter text on mobile devices”, “do you face any challenges in entering text on mobile devices”, and “could you describe these challenges”, and asked whether they would be interested in a technique that enables text entry on smartwatches. Participants took turns responding to the questions. We also enabled them to discuss their responses amongst themselves. However, we moderated the discussion to make sure that participants did not deviate from the topic.
We then demonstrated four existing text entry techniques for smartwatches: 1) the default Wear OS Qwerty [10], which requires users to tap on individual keys; 2) WatchWriter [29], which uses a similar layout but enables users to gesture type by connecting the keys with the index finger; 3) SwipeRing [78], which uses a circular Qwerty and enables both tapping and gesture typing; and 4) COMPASS [96], which enables entering text by rotating a watch’s bezel (Fig. 3). None of the participants had seen these methods before the study. Since we did not have access to the academic solutions [29, 78, 96], we demonstrated them by showing video clips collected from the ACM Digital Library and YouTube. After each demonstration, we asked participants to mimic the actions needed to use the corresponding method on their wristwatches. Participants could ask to re-watch a video if they were unsure about its mechanism. We supervised this process to make sure that accurate actions were being performed.
Finally, we demonstrated several custom circular alphabetical layouts using the same process. We then asked questions about their most preferred keyboard layout(s) for smartwatches (circular vs. block, alphabetical vs. Qwerty-based), scanning direction (clockwise vs. counterclockwise vs. shortest-path), scanning interval (500–2,000 ms), and switch (tap vs. rotating the bezel vs. rotating the crown vs. pressing the crown). The complete session was video-recorded for qualitative analysis. This protocol was reviewed and approved by the Institutional Review Board (IRB).

3.3 Results & Discussion

We transcribed the complete focus group, then coded the data using a bottom-up approach for thematic analysis. The coding was iterative, identifying common themes in participant responses.

3.3.1 Mobile Text Entry Challenges.

We initiated the discussion by asking participants to share their smartphone usage behaviors. We then asked them to discuss the challenges they face in entering text on mobile devices. In response, participants shared many difficulties that correspond to the findings reported in the literature [3, 12, 16, 30, 31, 40, 62, 64, 65, 66, 88]. They all found precise target selection difficult due to the absence of tactile feedback and physical references, and due to various physical challenges (N = 3). All deemed feature phones much easier to use because of their physical keys, but participants tend not to use them as they are not ideal for consuming multimedia, like reading newspaper articles or watching videos (N = 3). Interestingly, one challenge they all articulated was accessing the device itself (N = 3). They all expressed physical difficulties in getting the device out of a pocket, purse, or bag, or even finding the device at home, which often discouraged them from checking their mobile device or using it for composing text. To address this, some wear the device around the neck on a lanyard (N = 2), but find it uncomfortable, heavy, and awkward in social settings. Some also found lifting a phone on a lanyard difficult (N = 2). For example, P1 stated, “In a public place, it’s hard for me to answer my regular phone because it’s on a lanyard”. None of them were frequent users of voice assistants (N = 3), especially in public places, due to their unreliability in noisy places and due to security and privacy concerns. P1 and P3 identified the screen reader features of smartphones as unusable because they either require precise target selection or finger gestures like rotating or pinching, which they cannot perform. Hence, all participants relied on desktop computers for all text entry episodes, when possible (N = 3). P3 commented, “I tend to use a computer [...] because computer is less responsive [causes fewer accidental input]”.

3.3.2 Text Entry on Smartwatches.

Participants were very receptive to text entry on smartwatches, provided that the technique is usable and effective (N = 3). They felt that such a technique could resolve the device-access challenges since, once worn, the device remains on the wrist (N = 3). P1, who owned a smartwatch, could not use its default keyboard due to physical challenges. She also expressed her dissatisfaction with the device’s speech-to-text feature. P2 and P3 explored different smartwatches at stores but decided against purchasing one as they found them inaccessible.

3.3.3 Keyboard Shape and Layout.

All participants (N = 3) preferred circular keyboards over square/rectangular-shaped keyboards since they found the former to be minimalistic, less cluttered, and, in general, more appropriate for smartwatches. They also preferred arranging the letters in alphabetical order, assuming that it would be much easier for them to learn and use (N = 3). P2 commented, “I think the Qwerty keyboard’s [rectangular] shape fits with the idea of computer keyboard [...], but we’re talking about a watch and I think the idea of the letters going around [in a circle] alphabetically might be more intuitive”. P3 argued that locating the letters will be much easier on an alphabetical layout, while a Qwerty-based keyboard would have a steeper learning curve. They all agreed that starting the letters from the top of the layout (Fig. 1) is more intuitive (N = 3).

3.3.4 Scanning Group, Direction, and Interval.

All participants (N = 3) were in favor of grouping the letters into zones to reduce the total number of keys. They identified that scanning through a smaller number of zones would increase the keyboard’s usability and make it faster. P2 commented, “I think grouping letters would actually make the process faster”. They also felt that its resemblance to T9 [17] would make it easier for them to learn, since they had used T9 on feature phones. In terms of scanning direction, all participants (N = 3) found counterclockwise scanning unnatural. They were, however, undecided between clockwise and shortest-path scanning. Shortest-path scanning uses an adaptive scanning direction that highlights the zones in the direction closest to the most probable letters. For example, when clockwise scanning requires going through more zones to reach the most probable letters than counterclockwise scanning, the keyboard switches to counterclockwise for the next input, and vice versa. Participants found the idea of shortest-path scanning intriguing but were unsure whether it would work in actual text entry episodes, thus wanted to test it before making up their minds (N = 2). P2 stated, “I like the idea of the most probable letter, or I could hate it. [It] could end up being a disaster if your prediction of what the most likely letter is wrong”. P3 felt that clockwise scanning is the most natural because “... you know that there actually is going to be always the same, and we can anticipate [the direction]”. Due to the absence of a consensus, we further investigate this in Section 7. In the focus group, we demonstrated scanning intervals from 500 to 2,000 ms, of which participants picked 1,000 ms as the most appropriate (N = 3). Yet, they wanted this value to be adjustable, since users with different disabilities may prefer different intervals. They also noted that they could eventually prefer scanning intervals as low as 500 ms as they become more experienced with the keyboard (N = 2).

3.3.5 Manual and Automated Scanning.

Most participants (N = 2) preferred automated scanning to reduce physical efforts. P1 and P3 stressed that it is difficult for them to continuously rotate the crown of the watch. P1 commented, “I can’t really do the rotational aspect but I could press... I like [automated scanning] better, for the simple fact that you don’t have to rotate it”. P2, however, believed that manual scanning by rotating the crown is a possibility (N = 1). We compare these two approaches in Section 5.

3.3.6 Switch Preference.

Participants were unable to perform complex multi-finger gestures like pinching (N = 3). They were, however, able to perform taps and directional gestures (i.e., swipes) with one finger (N = 3), provided that these could be performed anywhere on the screen (i.e., did not require selecting targets) and the gestures did not have to be of a specific size. However, all participants preferred using the crown the most, as they could anchor their finger on it and, when pressing it, feel a push-back sensation on the finger (N = 3). P3 mentioned that the knob affords a more stable means of interaction, “I like the physical button [the crown] because if I’m having shaky hands that day, I don’t have to try to force my finger to stay on one specific spot versus pressing a button with thumb”. Participants also found it to be more user-friendly, since touching the crown does not occlude the display (N = 3). Besides, two of them pointed out that the crown could be pressed with the knuckles and finger joints, which makes it even more usable (a behavior we observed in the final study, see Fig. 14). Participants found the two-finger action needed to rotate the bezel physically demanding. P1 commented, “I couldn’t do the bezel, I could use only buttons [the crown].” P3 reasoned that although the bezel is larger than the crown, interacting with the latter is straightforward and more natural, since it could be pressed or even rotated with one finger. She commented, “I mean, the crown seems the simplest since I don’t have to like get my whole hand involved... [Rotating the bezel] requires too much coordination!” Based on these findings, we used crown press as the switch of our scanning keyboard. The keyboard also uses location- and size-agnostic taps and directional gestures (i.e., swipes) on the display to afford additional features.
Method | Letters per Key | Entry Speed
Yi et al. [96] | 3 | 9–13 wpm
Jiang and Weng [42] | 4–5 | 9.6–11 wpm
Gong et al. [28] | 4–5 | 10 wpm
Dunlop et al. [18], Komninos and Dunlop [48] | 3–6 | 8 wpm
Table 3: Average text entry speed of ambiguous smartwatch keyboards that assign multiple letters to each key.

4 Crownboard: Design and Optimization

The design goal of Crownboard was to strike the right balance between usability, learnability, and performance in terms of text entry speed and accuracy. Assigning all letters to one key makes interaction fast and easy but hurts the disambiguation ability of predictive models [56]. On the other hand, assigning a dedicated key to each letter eliminates the need for disambiguation but increases the scanning time, which affects speed and accuracy. Therefore, we adopted a systematic approach to decide the total number of keys, the number of letters per key, which letters to group onto each key, and the most effective scanning interval.

4.1 Key Design

The total number of keys in the layout was decided based on the optimal number(s) of letters per key, for which we conducted a literature review of ambiguous smartwatch keyboards that use linguistic models to disambiguate the input. We observed a correlation between the number of letters per key and entry speed. There seems to be a somewhat inverse relationship between them: the fewer the letters per key, the better the speed (Table 3). Considering this, and the finding that fewer keys make scanning keyboards more accessible to people with motor impairments [50], we explored 3–6 letters per key in the following steps.

4.2 Layout Optimization

We explored all possible layouts with 3 to 6 letters per key for the circular string {abcdefghijklmnopqrstuvwxyz} to find the least ambiguous alphabetical layout. One way to approach the problem is to redefine it as a search for the layout that has the minimal number of clashing bigrams within any given key, in other words, the fewest frequent letter pairs sharing a key. For example, “th” is the most frequent bigram, thus ‘t’ and ‘h’ must be on different keys. We prioritized this because participants of the focus group had difficulties pressing the crown repeatedly in short intervals. A prior work [94] also reported a similar phenomenon causing “long keypress errors” due to pressing a key longer than the active key repeat delay. Reducing common letter pairs on the same key forces users to wait for the next highlighted zone before the next keypress, which reduces the possibility of such errors. We did not focus on minimizing word pairs whose zone sequences are identical except for one zone; however, we addressed this in the disambiguation process described in Section 4.4.
Formally, let us denote the set of possible keys as L, set of all bigrams as B, first and second letter within a particular bigram as b1 and b2, respectively, and the frequency of a bigram b1b2 as f(b1b2). Then, the optimization goal corresponds to the following problem:
\begin{equation} \min _{l \in L} \sum _{b_1b_2 \in B} f(b_1b_2)\, I(\text{key}(b_1, l) = \text{key}(b_2, l)), \end{equation}
(1)
where I is the indicator function with a value of 1 if the evaluated condition is true, and key(x, l) is the key to which letter x is assigned within the layout l.
This minimization problem can be solved by simple enumeration (brute-force search) over all configurations of the problem, with a total runtime of O(|L||B|). To generate the set of possible layouts L, we split the circular string into key sizes ranging from 3 to 6 letters, which gave us a total of 27,560 different layouts. The set of bigrams B contains all 676 possible bigrams of the English language [68]; we used the bigrams.json file [55] as the reference for our frequencies f(b1b2). To prevent numeric overflow, we scaled down all frequencies by multiplying them by 1.0 × 10⁻¹¹, and report the scaled-down frequencies as the final scores. The layout with the minimal ambiguity score (Eq. 1) was: {yza} {bcd} {efg} {hij} {klmn} {opq} {rst} {uvwx} with a value of 1.53, which we use in Crownboard (Fig. 1). This resulted in 48 mm² (three-letter) to 64 mm² (four-letter) zones in Crownboard. For reference, the layout with the worst score was: {xyzab} {uvw} {opqrst} {ijklmn} {cdefgh} with a value of 6.19.
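For illustration, the following Python sketch enumerates and scores the candidate layouts against Eq. (1). It is a minimal reimplementation under our own naming, assuming a bigram-frequency dictionary in the spirit of the bigrams.json file cited above; it is not the authors' actual code.

```python
ALPHABET = "abcdefghijklmnopqrstuvwxyz"
MIN_KEY, MAX_KEY = 3, 6  # letters per key, per the design space above

def circular_layouts():
    """Enumerate partitions of the circular alphabet into consecutive
    zones of 3-6 letters (27,560 layouts per the paper)."""
    def splits(s):
        if not s:
            yield []
            return
        for k in range(MIN_KEY, MAX_KEY + 1):
            rest = len(s) - k
            if k <= len(s) and (rest == 0 or rest >= MIN_KEY):
                for tail in splits(s[k:]):
                    yield [s[:k]] + tail
    seen = set()
    # rotating the start point lets zone boundaries fall anywhere on the circle
    for offset in range(len(ALPHABET)):
        rotated = ALPHABET[offset:] + ALPHABET[:offset]
        for layout in splits(rotated):
            signature = frozenset(layout)
            if signature not in seen:
                seen.add(signature)
                yield layout

def ambiguity_score(layout, bigram_freq):
    """Eq. (1): summed frequency of bigrams whose two letters share a zone."""
    zone_of = {ch: i for i, zone in enumerate(layout) for ch in zone}
    return sum(f for (b1, b2), f in bigram_freq.items()
               if zone_of[b1] == zone_of[b2])

def best_layout(bigram_freq):
    """Brute-force search over all layouts, O(|L||B|)."""
    return min(circular_layouts(), key=lambda l: ambiguity_score(l, bigram_freq))
```

Running best_layout on the scaled bigram frequencies should, if this enumeration matches the authors', reproduce the winning split above.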
Method | Participant | Scanning Interval | Entry Speed
Ashtiani and MacKenzie [7] | NR | 700, 850, 1,000 ms | 4.3, 5.3, 4.6 wpm
MacKenzie [56] | NR | 1,100–700 ms | 4–5 wpm
Felzer et al. [23] | NR | 1,000–500 ms | 2–7 wpm
Belatar and Poirier [9] | R (Expert) | 754–118 ms | 2–3 wpm
Baljko and Tam [8] | NR | 750, 1,250 ms | 2.6, 1.8 wpm
Prabhu and Prasad [74] | NR | 2,100, 1,800, 1,500, 1,200 ms | NA
Table 4: Scanning intervals of scanning keyboards for people with motor impairments and the reported text entry speeds. “R” signifies studies conducted with representative users, while “NR” signifies non-representative (non-disabled) users.

4.3 Scanning Interval

A scanning interval is commonly used in text entry techniques for people with motor impairments [52, 53, 58] to automatically go over the keys by highlighting them until the desired key is selected by the user. The maximum possible entry speed of a scanning keyboard is reliant on the pace of its scanning interval. Slower scanning intervals can make users impatient and affect entry speed [74], while faster scanning intervals can cause too many errors, and even prevent users from using the technique [23]. To determine the most effective scanning interval, we conducted a literature review of scanning keyboards aimed at people with motor impairments. Table 4 presents a subset of these keyboards, only one of which was evaluated with the representative population. Other keyboards that were evaluated with motor impaired people [9, 22, 24, 26, 52, 71, 72] either used a different scanning mechanism or did not report scanning intervals. The review revealed that scanning speed varies from 700 ms (fastest) to 2,100 ms (slowest). Based on this and the findings of the focus group (Section 3), we decided to use a scanning interval of 1,000 ms. However, the system allows users to adjust the interval as needed.

4.4 The Disambiguation Process

Crownboard disambiguates the input (i.e., sequences of keys) into words. When there are multiple possible words for a sequence of keys, it automatically selects the most probable one and enables users to pick a different possible word from a suggestion bar. Users switch to the suggestion bar by tapping anywhere on the screen. Formally, given a key sequence s, Crownboard predicts the most probable word w from a vocabulary of N words w1, …, wN using the following equation:
\begin{equation} w = \mathop{arg\,max}_{w_i \in {w_1, \dots, w_N}} P(w_i | s), \end{equation}
(2)
where P(wi|s) is the conditional probability of getting word wi given a sequence s. To compute these probabilities, we apply the Bayes’ rule, then replace probabilities by counts of the occurrences of the words/sequences in the training corpus:
\begin{equation} P(w_i | s) = \frac{P(w_i, s)}{P(s)} = \frac{\text{count}(\text{prefix}(w_i) = s)}{\text{count}(s)}. \end{equation}
(3)
To efficiently compute and store a conditional probability table of P(wi|s) for all possible words wi and sequences s, we use a binary prefix tree, also known as the Trie data structure. Once the tree is constructed, we trim it to contain at most the K = 10 most probable words for each sequence s and save it to run on the smartwatch. It is a unigram word model, thus does not account for previously typed words. To address this, we accompany the model with a bigram model that predicts the most probable word wi given a sequence of keys s and the previously typed word wk. This requires computing probabilities P(wi|s, wk) by a straightforward extension of Eq. (3). Despite extending the model for bigram probabilities, it remains simplistic in nature, which is necessary to account for the limited processing power of smartwatches. Strengthening the decoder by conditioning on more than one previously typed word could potentially improve the performance of the keyboard. Employing recent machine learning models that can natively handle sequence-level information (such as LSTMs and Transformers) and training them on a substantial amount of data could further improve the disambiguation process. However, designing and training such models is challenging and outside the scope of this work.
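As an illustration of this decoding step, the Python sketch below builds the trimmed sequence-to-words table of the unigram model (Eqs. 2 and 3). The flat prefix dictionary stands in for the trie, and the word counts in the usage example are hypothetical.

```python
from collections import defaultdict

ZONES = ["yza", "bcd", "efg", "hij", "klmn", "opq", "rst", "uvwx"]
ZONE_OF = {ch: i for i, zone in enumerate(ZONES) for ch in zone}

def key_sequence(word):
    """Map a word to its zone-index sequence, e.g. 'the' -> (6, 3, 2)."""
    return tuple(ZONE_OF[ch] for ch in word)

def build_table(word_counts, k=10):
    """For every zone-sequence prefix, keep the k most frequent words whose
    zone sequence starts with it (count(prefix(w_i) = s) in Eq. 3)."""
    table = defaultdict(list)
    for word, count in word_counts.items():
        seq = key_sequence(word)
        for i in range(1, len(seq) + 1):
            table[seq[:i]].append((count, word))
    return {s: [w for _, w in sorted(cands, reverse=True)[:k]]
            for s, cands in table.items()}

# hypothetical counts: 't'/'r', 'h'/'i', and 'e'/'g' share zones, so
# "the", "tie", and "rig" all collide on the sequence (6, 3, 2)
table = build_table({"the": 500, "tie": 40, "rig": 3})
print(table[(6, 3, 2)])  # ['the', 'tie', 'rig'] -- most probable word first
```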

4.5 Error Correction

To reduce user actions, Crownboard appends a space when a word is selected automatically by the keyboard or manually from the suggestion bar by pressing the crown. To confirm the selection of the most probable word, which appears in the transcribed text area in a greyed-out font as an auto-completion suggestion, users swipe from left to right anywhere on the screen. We used this gesture since the mapping was identified as very intuitive in a prior work [5], presumably because the movement of the cursor corresponds to the direction of the swipe. To delete the last entered word, users long-tap anywhere on the screen for 500 ms, which is comparable to existing scanning keyboards for touchscreens [56].

4.6 Out-of-Vocabulary Words and Special Characters

Crownboard uses a multi-tap [41] inspired approach to enable the entry of out-of-vocabulary (OOV) words. When the zone containing the target letter is highlighted, users double-press the crown. A double-press is detected when the next press is performed within 1,000 ms of the previous press. This duration was chosen to reduce unwanted selection of the next zone, since the zones are scanned every 1,000 ms. Upon double-press, each letter of the zone is displayed on the suggestion bar, highlighted one by one. Users then press the crown to enter the highlighted letter (see Fig. 4). We used this approach since many users are already familiar with multi-tap, which is likely to make learning and using the method easier due to knowledge and skill transfer. However, we did not evaluate this in user studies.
Figure 4: The process of entering one letter at a time for out-of-vocabulary words. First, the user double-presses the crown to see each letter of the zone on the suggestion bar, highlighted one-by-one. The user then presses the crown to enter the highlighted letter ‘s’.
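The double-press rule can be captured in a few lines. The sketch below reflects our reading of the 1,000 ms window (timestamps in milliseconds) rather than the authors' implementation; a real input handler would additionally defer the single-press action until the window lapses.

```python
DOUBLE_PRESS_WINDOW_MS = 1000  # matches the 1,000 ms scanning interval

class CrownPressClassifier:
    """Label each crown press as part of a double-press or as a single
    press, based on the time elapsed since the previous press."""
    def __init__(self):
        self._last_press_ms = None

    def on_press(self, now_ms):
        is_double = (self._last_press_ms is not None and
                     now_ms - self._last_press_ms < DOUBLE_PRESS_WINDOW_MS)
        # after a double-press, reset so a third press starts a new cycle
        self._last_press_ms = None if is_double else now_ms
        return "double" if is_double else "single"
```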
The current prototype of Crownboard is optimized for the English language and does not enable the entry of uppercase letters, numbers, and special symbols. However, these features could be easily added by enabling users to switch between layouts for different languages, digits, and symbols by swiping right to left on the screen. Note that it is a common practice to evaluate novel text entry techniques without enabling numeric and special character entry to eliminate a potential confound [59].

4.7 Crown vs. Touch Interactions

Crownboard is designed for motor impaired people who can either lift up or rest the watch hand on a surface (e.g., the armrest of a wheelchair) for interaction with the other hand. We maximized interactions with the crown based on the findings of the focus group and prior research showing that people with motor impairments prefer and perform much better with physical buttons than with precise target selection via touch [27]. Studies also showed that motor impaired people report higher levels of discomfort and commit substantially more errors with touch than with physical alternatives [40, 81]. In addition to crown press, Crownboard uses double-press in special cases, particularly to switch to character entry mode for out-of-vocabulary words. We used double-press instead of double-tap on the display since prior investigations found the latter physically challenging and time-consuming for people with motor impairments to perform [45, 89]. Unlike double-tap, users do not necessarily have to lift the finger off the crown for a double-press; rather, they can continue using it as an anchor for the second press, presumably increasing usability.
Crownboard uses taps and gestures on the screen that do not require precise selection and thus are easier for motor impaired people to use [89]. Trewin et al. [88] reported that such an action is “easy for everyone [with motor impairments] to perform[, as it] does not require fine positioning.” Relevantly, prior studies identified 12 mm as an ideal target size for people with motor impairments for improved speed and accuracy [30, 67]. Another work [27] recommended using 18 mm targets. With Crownboard, users have the complete 930 mm² display to perform the taps, much larger than the minimum size recommended in the literature.
We conducted a simulation study to investigate the distribution of crown and touch-based interactions in common text entry tasks with Crownboard. It predicted the total number and types of actions needed to enter the 500 short English phrases from the MacKenzie & Soukoreff set [59]. This set is commonly used in text entry research since it includes phrases that are moderate in length (M = 28.61 characters) and highly correlate with character frequency in the English language. The phrases do not contain any numeric or special characters. There are some uppercase characters, which were converted to lowercase in our investigation. For simplicity, the simulation assumed that no errors were committed in the process of text entry and that users selected a suggestion when it matched the target word. The simulation revealed that entering the phrases with the manual Crownboard (described in Section 5.1) requires 31,713 actions in total, of which 89% are crown-based. Likewise, the automated version requires 13,913 actions in total, of which 74% are crown-based. Fig. 5 illustrates these findings. This demonstrates Crownboard’s reliance on physical crown-based interactions to improve accessibility.
Figure 5: Distribution of crown and touch-based interactions in common text entry tasks with manual and automated Crownboards. The automated version does not require rotating the crown since the method automatically scans the zones.
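The full simulation replays the decoder and suggestion selection, which is beyond what we can show here. The Python sketch below conveys only the counting idea for the manual version, under simplifying assumptions of our own (letter-by-letter entry, shortest-arc rotation, scanning restarting at the top zone per word, and one accepting swipe per word) that will not reproduce the exact totals above.

```python
ZONES = ["yza", "bcd", "efg", "hij", "klmn", "opq", "rst", "uvwx"]
ZONE_OF = {ch: i for i, zone in enumerate(ZONES) for ch in zone}

def action_counts(phrase):
    """Tally crown vs. touch actions for the manual keyboard under the
    simplifying assumptions stated above."""
    rotations = presses = swipes = 0
    for word in phrase.lower().split():
        pos = 0  # assume scanning restarts at the top zone for each word
        for ch in word:
            target = ZONE_OF[ch]
            # one crown detent per zone step, along the shorter arc
            rotations += min((target - pos) % 8, (pos - target) % 8)
            presses += 1  # crown press selects the highlighted zone
            pos = target
        swipes += 1  # left-to-right swipe accepts the completed word
    return {"crown": rotations + presses, "touch": swipes}

print(action_counts("the quick brown fox"))
```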

5 User Study 1: Manual Vs. Automated Scanning

In this study, we compared manual and automated scanning in a between-subjects design. The purpose was to identify the most effective rotation mechanism for the keyboard in terms of speed and accuracy. The study was conducted with non-disabled people since identifying, recruiting, and conducting in-person studies with motor impaired people was extremely difficult during the Coronavirus pandemic. Since the study focuses only on the performance difference between rotation mechanisms, we speculate that the results may be generalizable to the target population and can shed light on whether non-disabled people can use Crownboard under situational impairments [79].
Figure 6: The process of entering the word “drink” with manual Crownboard. The user rotates the crown upward to scan the zones in clockwise direction. When the key with the letter ‘d’ is highlighted, she presses the crown to select it. The user then rotates the crown downward to scan the zones in counterclockwise direction, selects the zone containing the letter ‘r’, then continues scanning in the same direction to select the zone containing ‘i’. The user notices the target word in the suggestion bar. She taps anywhere on the screen to start scanning the suggested words by rotating the crown, then selects the intended word by pressing the crown.

5.1 Manual Crownboard

For this study, we developed a manual version of Crownboard that requires users to rotate the crown to scan the zones for selection. Rotating the crown upward scans the zones in clockwise direction and rotating the crown downward scans the zones in counterclockwise direction. All other interactions are identical to Crownboard. Fig. 6 demonstrates the process of entering words with manual Crownboard.

5.2 Participants

Sixteen participants voluntarily took part in the study. They were recruited by distributing a call for participation through the local university emailing lists and regional social media channels. Those who were interested in participating contacted us via email, phone, or private messages on our social media profiles. We divided participants into two groups: manual and automated. An attempt was made to ensure that the groups were comparable in terms of age, gender, and mobile device experience. Table 5 presents the demographic information of these groups. None of them reported having a condition limiting their fine motor skills. Each participant received U.S. $15 for volunteering.
  | Manual | Automated
Age | M = 27.5 years (SD = 3.7) | M = 28.9 years (SD = 2.7)
Gender | 1 female, 7 male | 3 female, 5 male
Handedness | 7 right, 1 left | 7 right, 1 ambidextrous
Experience with mobile devices | M = 10.3 years (SD = 1.2) | M = 10.3 years (SD = 3.1)
Experience with smartwatches | M = 1.3 years (SD = 1.7) | M = 0.8 years (SD = 1.1)
Table 5: Demographics of the two user groups in the first user study.

5.3 Apparatus

We used an LG Watch Style smartwatch (42.3 × 45.7 × 10.8 mm, 9.3 cm² circular display at 360 × 360 pixels, 46 grams) running Wear OS in the study. We decided to use a circular watch since it is the most popular shape for (smart)watches [43, 46]. We developed both versions of Crownboard with Android Studio 4.0, SDK 26. Both versions calculated all performance metrics directly and logged all interactions with timestamps.

5.4 Design

The study used a mixed-design with one between-subjects independent variable: method (two conditions: manual, automated) and one within-subjects independent variable: block (five blocks). We decided to use a mixed-design to avoid interference between the conditions. Since both methods use the same layout, the skills acquired in one condition could have affected the performance of the other condition in a within-subjects design [57]. We divided the participants into two separate groups: manual and automated, with eight participants each. The groups used the technique assigned to them to enter short English phrases from the MacKenzie and Soukoreff [59] set in five blocks. Each block contained eight random unique phrases from the set. In summary, the design was: 2 groups (manual, automated) × 8 participants × 5 blocks × 8 random phrases = 640 phrases in total. The dependent variables were the following commonly used performance metrics:
Words per minute (wpm) signifies the total number of words entered in one minute, where a “word” is defined as five characters including letters, spaces, and other printable characters [6].
Error rate (%) is the average percentage of erroneous characters remaining and correct characters missing in the final transcribed text. In other words, it is the ratio of the total number of incorrect and missing characters in the transcribed text to the length of the transcribed text.
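As a reference implementation of these two metrics, the sketch below computes wpm from the transcribed length and error rate via the minimum string (Levenshtein) distance; using that distance to operationalize “incorrect and missing characters” is our assumption, not a detail stated in the paper.

```python
def wpm(transcribed, seconds):
    """Words per minute, one word = five characters [6]."""
    return (len(transcribed) / 5.0) / (seconds / 60.0)

def error_rate(presented, transcribed):
    """Percentage of incorrect and missing characters relative to the
    transcribed length, via minimum string (Levenshtein) distance."""
    n = len(transcribed)
    d = list(range(n + 1))
    for i in range(1, len(presented) + 1):
        prev, d[0] = d[0], i
        for j in range(1, n + 1):
            cur = d[j]
            d[j] = min(d[j] + 1,      # character missing from transcribed
                       d[j - 1] + 1,  # extra character in transcribed
                       prev + (presented[i - 1] != transcribed[j - 1]))
            prev = cur
    return 100.0 * d[n] / max(n, 1)

print(wpm("the quick brown fox", 45.0))      # ~5.1 wpm
print(error_rate("the quick", "the quicc"))  # ~11.1 %
```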
Figure 7: Participants entering text with the manual (left) and the automated Crownboard (right) in the first user study.

5.5 Procedure

We conducted the study with one participant at a time in the lab. First, we explained the procedure and answered all questions participants had; after that, we collected the consent forms. We then asked them to answer questions about demographics and mobile usage experience. After that, we introduced the keyboard assigned to them, instructed them to sit in front of a desk, and asked them to wear the smartwatch on the hand they preferred. However, to increase the external validity of the study, we enforced the use of only one finger for interaction with the device. All participants wore the smartwatch on their left hand, rested the arm on the table, and performed the actions using the index finger (Fig. 7).
We asked participants to practice with the keyboard assigned to them by transcribing 2–3 phrases from the MacKenzie and Soukoreff [59] set. These phrases were not repeated in the study. The main study started after this short practice session. There were five blocks per condition, with eight random unique phrases per block. We enforced a 2–3 minute gap between the blocks to reduce any potential effects of fatigue. During the study, for both methods, the phrases were presented one by one at the top part of the smartwatch (Fig. 7). Participants were asked to read a presented phrase carefully, transcribe it “as fast and accurate as possible”, then swipe from top to bottom anywhere on the screen to see the next phrase. The transcribed phrase was displayed at the bottom part of the smartwatch screen. Error correction was recommended but not forced. Upon completion, participants answered a short post-study questionnaire about the assigned method’s speed, accuracy, learnability, ease of use, and their willingness to use the method on smartwatches on a 5-point Likert scale. They also took part in a debrief session where they were asked about any potential strategies used to enhance the performance of the method assigned to them.
All researchers involved in this study were fully vaccinated for COVID-19. Participants were pre-screened for COVID-19 symptoms during the recruitment process by a researcher and on the day of the experiment by the host institute. Both researchers and participants wore face coverings and sanitized their hands before study sessions. They maintained a 3 distance from each other at all times. All study devices and surfaces were disinfected before and after each study session. This protocol was reviewed and approved by the Institutional Review Board (IRB).
Figure 8: (a) Average entry speed (wpm) and (b) error rate (%) per block, with both manual and automated Crownboard, fitted to power trendlines.

5.6 Results

A complete study session took about 60 minutes, including demonstration, questionnaires, and breaks. A Shapiro-Wilk test revealed that the response variable residuals were normally distributed. A Mauchly’s test indicated that the variances of the populations were equal. Hence, we used a mixed-design ANOVA for the quantitative factors. We used a Mann-Whitney U test on the between-subjects questionnaire data.

5.6.1 Entry Speed.

An ANOVA identified a significant effect of method on entry speed (F(1, 14) = 20.03, p < .001). Average entry speed with manual and automated was 5.8 wpm (SD = 2.2) and 3.9 wpm (SD = 1.5), respectively. There was also a significant effect of block (F(4, 4) = 18.55, p < .0001). The method × block interaction effect was also statistically significant (F(4, 56) = 3.90, p < .01). Fig. 8a illustrates average entry speed per block for both methods, fitted to power trendlines.

5.6.2 Error Rate.

An ANOVA failed to identify a significant effect of method on error rate (F(1, 14) = 0.05, p = .82). Average error rates with manual and automated were 2.3% (SD = 7.1) and 2.8% (SD = 9.0), respectively. There was also no significant effect of block (F(4, 4) = 0.61, p = .65). Fig. 8b illustrates average error rate per block for both methods, fitted to power trendlines.
Figure 9: Median user ratings of the two methods on a 5-point Likert scale, where 1–5 signifies disagree–agree. Error bars represent ± 1 standard deviation (SD).

5.6.3 User Feedback.

A Mann-Whitney U test failed to identify a significant effect of method on perceived speed (U = 20.0, Z = −1.30, p = .19), accuracy (U = 28.0, Z = −0.47, p = .72), learnability (U = 28.0, Z = −0.49, p = .72), ease-of-use (U = 17.0, Z = −1.68, p = .13), or willingness-to-use (U = 26.0, Z = −0.66, p = .57). Fig. 9 illustrates median user ratings of the two methods.

5.7 Discussion

The manual Crownboard was significantly faster than the automated Crownboard (about 49% faster, based on the averages above). This is not surprising, as the 1,000 ms rotation interval adds to the total time needed to select a zone [72]. Since the participants did not have a motor impairment, they could easily rotate the crown at the desired speed and strategize the rotation direction to improve entry speed. In the post-study debrief session, six out of eight participants in the manual group reported that they intentionally switched the crown’s rotation direction to reach the desired zone faster. If the intended zone was closer from the left (i.e., a smaller number of zones between the current and the intended zone), they rotated the crown counterclockwise, and vice versa. There was a significant effect of block on entry speed. Learning occurred with both methods. Average entry speed over blocks correlated well with the power law of practice [83] for both manual (R² = 0.94) and automated (R² = 0.91). However, participants’ entry speed improved at a much faster rate with manual than with automated (Fig. 8a). Entry speed with manual improved by 45% from the first block to the last, while the same with automated improved by 28%. A post hoc Tukey-Kramer multiple-comparison test identified three significantly different groups of blocks in manual: {1}, {2, 3}, {4, 5} and in automated: {1}, {2, 3, 4}, {5}. The fact that entry speed with automated improved by 10% from the fourth block to the last (Fig. 8a) suggests that the performance of this method is likely to improve further with practice. The methods were comparable in terms of accuracy. We did not observe learning between the blocks; instead, error rates were rather erratic (Fig. 8b), which is not unusual for word-based text entry methods [11]. When transcription errors did occur, participants misspelled entire words due to the selection of an incorrect zone or an incorrect word from the suggestion bar, or omitted them entirely.
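The power trendlines and R² values used throughout this section can be obtained with an ordinary least-squares fit in log-log space, as sketched below; computing R² on the log-transformed data is one common convention and an assumption on our part, as are the per-block means in the usage example.

```python
import numpy as np

def fit_power_law(blocks, mean_wpm):
    """Fit wpm = a * block^b in log-log space and return (a, b, R^2),
    with R^2 computed on the log-transformed data."""
    x = np.log(np.asarray(blocks, dtype=float))
    y = np.log(np.asarray(mean_wpm, dtype=float))
    b, log_a = np.polyfit(x, y, 1)          # slope and intercept
    residuals = y - (log_a + b * x)
    r2 = 1.0 - residuals.var() / y.var()    # coefficient of determination
    return np.exp(log_a), b, r2

# hypothetical per-block means shaped like Fig. 8a
a, b, r2 = fit_power_law([1, 2, 3, 4, 5], [4.6, 5.4, 5.8, 6.3, 6.7])
print(f"wpm ~= {a:.2f} * block^{b:.2f} (R^2 = {r2:.2f})")
```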
Subjective data revealed that almost all participants found the method assigned to them fast, accurate, easy to learn, and easy to use, and wanted to use it on their smartwatches. The remaining participants were neutral. This is surprising considering the techniques’ slower entry speed compared to state-of-the-art text entry techniques for smartwatches [4]. This suggests that while the manual version may not be appropriate for motor impaired people, it could be an effective technique for people without disabilities. However, further investigation is needed to verify this.

6 The Shortest-path Crownboard

Results of the first study revealed that non-disabled people perform much better with the manual Crownboard than with the automated version. Yet, we intended to improve the performance of the automated version because the focus group with people with limited dexterity (Section 3.3) revealed a desire for an automated version, since rotating the crown could be difficult at times.
Based on the finding that users tend to strategize the crown’s rotation direction (i.e., they intentionally switch scanning direction to reach the desired zone faster), we implemented a shortest-path version of the automated Crownboard (Fig. 10). When users select the current zone zp by pressing the crown, Algorithm 1 automatically determines the scanning direction (clockwise or counterclockwise) to enable faster zone selection.
First, it finds the next probable letters (l1 and l2 in Algorithm 1), which are calculated using the bigram model described in Section 4.4. If the most probable letter l1 has a significant probability (greater than P(l2) by at least 0.1), it chooses the shortest direction toward the zone that contains the letter l1, denoted as direction(zone(l1), zp). Otherwise, the algorithm makes its decision based on the probability of zones instead of letters. It finds the most probable zones z1 and z2, where the probability of each zone is defined as the combined probability of the letters in that zone. If the most probable zone z1 has a significant probability (greater than P(z2) by at least 0.1), it chooses the shortest direction toward the zone z1. Otherwise, it chooses the side that has the highest total probability (the sum of the probabilities of the zones on that side). The probability of the left side is denoted as \(P_\text{left}\); when \(P_\text{left} > 0.5\), it scans in the counterclockwise direction. Algorithm 2 computes the shortest-path direction between the current zone zp and the target zone z. This is achieved by counting the total number of zones between z and zp in both directions, then choosing the direction with the fewest rotation steps. Fig. 10 demonstrates the process of entering words with the shortest-path Crownboard.
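The following Python sketch is our reading of Algorithms 1 and 2; the identifier names, the input format of the bigram-model probabilities, and the tie-breaking details are assumptions, since the original algorithms are given only in pseudocode.

```python
NUM_ZONES = 8
CLOCKWISE, COUNTERCLOCKWISE = +1, -1
MARGIN = 0.1  # "significant probability" threshold from Algorithm 1

def shortest_direction(current, target):
    """Algorithm 2: count the zone steps in both directions and take
    the shorter arc from the current zone to the target zone."""
    cw = (target - current) % NUM_ZONES
    ccw = (current - target) % NUM_ZONES
    return CLOCKWISE if cw <= ccw else COUNTERCLOCKWISE

def next_scan_direction(current, letter_probs, zone_of):
    """Algorithm 1: pick the scanning direction after a zone selection.
    letter_probs maps each candidate letter to its bigram-model
    probability (an assumed input format)."""
    ranked = sorted(letter_probs, key=letter_probs.get, reverse=True)
    l1, l2 = ranked[0], ranked[1]
    if letter_probs[l1] - letter_probs[l2] >= MARGIN:
        return shortest_direction(current, zone_of[l1])
    # fall back to zone probabilities: sum the letter probabilities per zone
    zone_probs = {}
    for ch, p in letter_probs.items():
        zone_probs[zone_of[ch]] = zone_probs.get(zone_of[ch], 0.0) + p
    zones = sorted(zone_probs, key=zone_probs.get, reverse=True)
    if len(zones) > 1 and zone_probs[zones[0]] - zone_probs[zones[1]] >= MARGIN:
        return shortest_direction(current, zones[0])
    # otherwise scan toward the side holding more probability mass
    p_left = sum(p for z, p in zone_probs.items()
                 if (current - z) % NUM_ZONES < (z - current) % NUM_ZONES)
    return COUNTERCLOCKWISE if p_left > 0.5 else CLOCKWISE
```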
Figure 10:
Figure 10: The process of entering the word "cares" with the shortest-path Crownboard. Initially, the keyboard highlights the zones in the clockwise direction. When 'c' is selected, the keyboard switches to scanning in the counterclockwise direction, since the most probable zones are on that side (combined probability: 0.9432). When the zone containing 'a' is selected, the keyboard continues highlighting the zones in the counterclockwise direction, since the next most probable letter 'n' (probability: 0.9432) is closer in that direction. This process continues until the word is confirmed by the user.

7 User Study 2: Clockwise Vs. Shortest-path Scanning

The purpose of this study was to investigate whether the shortest-path version of Crownboard improves performance in terms of entry speed and accuracy. This study was also conducted with non-disabled people for the reasons discussed in Section 5.

7.1 Participants

We recruited twelve participants using the same procedure as the first user study (Section 5). None of them had participated in the previous user studies. Table 6 presents their demographic information. None of them reported having a condition that limited their fine motor skills. Each participant received U.S. $15 for volunteering.
Table 6:
Age: M = 28.7 years (SD = 7.2)
Gender: 6 female, 5 male, 1 non-binary
Handedness: 12 right
Experience with mobile devices: M = 12.1 years (SD = 5.6)
Experience with smartwatches: M = 1.0 year (SD = 2.1)
Table 6: Demographics of the participants in the second study.

7.2 Design

We used a within-subjects design for this user study, where the independent variables were: method (two conditions: clockwise, shortest-path) and block (5 blocks). We counterbalanced the conditions to reduce any potential effects of order. Each participant used both methods and entered short English phrases from the MacKenzie and Soukoreff [59] set in five blocks. Each block contained five random unique phrases from the set. In summary, the design was: 12 participants × 2 methods (clockwise, shortest-path) counterbalanced × 5 blocks × 5 random phrases = 600 phrases in total. The dependent variables were the same performance metrics recorded in the previous study (Section 5.4).
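As a reference for the dependent measures: the sketch below implements the two standard text entry metrics that evaluations of this kind typically record, words per minute and the minimum-string-distance (MSD) transcription error rate. The exact metrics used in this work are the ones defined in Section 5.4; the helper names below are ours.

```python
# Standard text entry measures (our helper names): words per minute and the
# minimum-string-distance (MSD) transcription error rate.

def wpm(transcribed: str, seconds: float) -> float:
    """Entry speed: (|T| - 1) characters over the trial time, at 5 chars/word."""
    return ((len(transcribed) - 1) / seconds) * (60 / 5)

def msd(a: str, b: str) -> int:
    """Levenshtein distance between presented and transcribed strings."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def error_rate(presented: str, transcribed: str) -> float:
    """Transcription error rate (%) based on MSD."""
    return msd(presented, transcribed) / max(len(presented), len(transcribed), 1) * 100

# e.g., a phrase transcribed in 80 s with one wrong character:
# wpm("the quick brown fox", 80.0); error_rate("the quick brown fox",
#                                              "the quick brown box")
```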

7.3 Procedure

We used the same procedure and safety measures as the previous study (Section 5.5), except that each participant practiced and entered text with both methods in a counterbalanced order (Fig. 11).
Figure 11:
Figure 11: Participants entering text with the clockwise (left) and the shortest-path Crownboard (right) in the second user study.

7.4 Results

A complete study session took about 60 minutes, including demonstration, questionnaires, and breaks. There were no significant effects of the order of conditions on the dependent variables (p > .05), which suggests that counterbalancing worked [57, pp. 177–180]. A Shapiro-Wilk test revealed that the response variable residuals were normally distributed, and a Mauchly's test indicated that the assumption of sphericity had not been violated. Hence, we used a repeated-measures ANOVA for the quantitative factors and a Wilcoxon Signed-Rank test for the within-subjects questionnaire data.
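For concreteness, the sketch below illustrates this analysis pipeline on synthetic long-format data. The data frame, column names, and numbers are ours, not the study's, and the library calls assume pingouin and scipy.

```python
import numpy as np
import pandas as pd
import pingouin as pg
from scipy import stats

# Synthetic long-format data standing in for the study log:
# one row per participant x method x block.
rng = np.random.default_rng(0)
rows = [{"pid": p, "method": m, "block": b,
         "speed": rng.normal(3.5 if m == "clockwise" else 3.1, 0.5)}
        for p in range(12) for m in ("clockwise", "shortest") for b in range(1, 6)]
df = pd.DataFrame(rows)

# Normality check (Shapiro-Wilk); ideally run on model residuals.
print(stats.shapiro(df["speed"]))

# Mauchly's test of sphericity for the block factor (one method at a time).
print(pg.sphericity(df[df.method == "clockwise"], dv="speed",
                    within="block", subject="pid"))

# Two-way repeated-measures ANOVA: method, block, and their interaction.
print(pg.rm_anova(df, dv="speed", within=["method", "block"], subject="pid"))

# Paired nonparametric test for 5-point Likert ratings (placeholder values).
clockwise_ratings = [4, 5, 4, 3, 5, 4, 4, 5, 3, 4, 5, 4]
shortest_ratings  = [3, 4, 3, 3, 4, 3, 4, 4, 2, 3, 4, 3]
print(stats.wilcoxon(clockwise_ratings, shortest_ratings))
```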
Figure 12:
Figure 12: (a) Average entry speed (wpm) and (b) error rate (%) per block, with both the clockwise and shortest-path Crownboard, fitted to power trendlines.

7.4.1 Entry Speed.

An ANOVA identified a significant effect of method on entry speed (F(1, 11) = 6.11, p < .05). Average entry speeds with clockwise and shortest-path were 3.5 wpm (SD = 1.2) and 3.1 wpm (SD = 1.1), respectively. There was also a significant effect of block (F(4, 44) = 6.05, p < .001). However, the method × block interaction was not statistically significant (F(4, 44) = 0.49, p = .74). Fig. 12a illustrates the average entry speed per block for both methods, fitted to power trendlines.

7.4.2 Error Rate.

An ANOVA identified a significant effect of method on error rate (F(1, 11) = 10.18, p < .01). Average error rates with clockwise and shortest-path were 0.53% (SD = 2.6) and 2.1% (SD = 7.4), respectively. However, there was no significant effect of block (F(4, 44) = 0.42, p = .79) or method × block interaction (F(4, 44) = 0.41, p = .80). Fig. 12b illustrates the average error rate per block for both methods, fitted to power trendlines.

7.4.3 User Feedback.

A Wilcoxon Signed-Rank test identified a significant effect of method on learnability (Z = −2.12, p < .05). However, no significant effect was identified on perceived speed (Z = −0.99, p = .32), accuracy (Z = −1.40, p = .16), ease-of-use (Z = −1.73, p = .08), or willingness-to-use (Z = −0.56, p = .58). Fig. 13 illustrates median user ratings of the two methods.
Figure 13:
Figure 13: Median user ratings of the two methods on a 5-point Likert scale, where 1–5 signifies disagree–agree. Error bars represent ± 1 standard deviation (SD). Red asterisk indicates a statistically significant difference.

7.5 Discussion

The average entry speed of the clockwise Crownboard was comparable to that in the previous study (3.5 vs. 3.9 wpm). It was significantly faster than the shortest-path Crownboard (13% faster). The post-study debrief session revealed that participants struggled with shortest-path because they could not anticipate the direction of the rotation, which prevented them from learning the method (also reported in Section 3.3.4). This is also reflected in the post-study questionnaire, where shortest-path was rated significantly more difficult to learn than clockwise (Fig. 13). It may be possible to facilitate the learning of the next probable zones by dynamically changing the background of the zones using gradients of the same color to indicate probability scores (darker color: higher probability; lighter color: lower probability, like a heatmap). However, further investigation is needed to determine whether this facilitates learning and thereby improves entry speed.
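As a sketch of that idea, each zone's probability could be mapped to the opacity of a single base color. The function, base color, and opacity range below are hypothetical, not an implemented feature:

```python
def zone_color(prob: float, base_rgb=(33, 118, 255)) -> str:
    """Map a zone probability in [0, 1] to an RGBA hex background color.

    Higher probability -> more opaque (darker-looking) zone, like a heatmap.
    """
    alpha = int(55 + 200 * prob)   # floor of 55 keeps unlikely zones visible
    r, g, b = base_rgb
    return f"#{r:02x}{g:02x}{b:02x}{alpha:02x}"

# zone_color(0.9) -> almost opaque; zone_color(0.05) -> faint
```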
There was a significant effect of block on entry speed, and learning occurred to some extent with both methods. Average entry speed over blocks correlated moderately well with the power law of practice [83] for both clockwise (R² = 0.8) and shortest-path (R² = 0.8). Participants' entry speed improved at a relatively faster rate with clockwise than with shortest-path (Fig. 12a): entry speed with clockwise improved by 22% from the first block to the last, while the same with shortest-path improved by 20%. Relevantly, a post hoc Tukey-Kramer Multiple-Comparison test identified three significantly different groups of blocks in clockwise: {1}, {2, 3, 4}, {5}, but no such difference was identified in shortest-path. The fact that entry speed with clockwise improved by 11% from the fourth block to the last (Fig. 12a) suggests that the performance of this method is likely to improve further with practice.
Interestingly, there was a significant effect of method on error rate: clockwise yielded a 74% lower error rate than shortest-path. This further supports the claim that participants had difficulties with shortest-path due to the unpredictable nature of its rotation. There was no significant effect of block or method × block, yet the average error rate over blocks correlated moderately well with the power law of practice [83] for clockwise (R² = 0.8). Hence, there is a chance that the effect of block on accuracy would reach statistical significance with a larger sample size. We observed an unexpected peak in the error rate of shortest-path in block 3. A deeper investigation failed to identify a distinct phenomenon causing this, hence we speculate it to be an outlier. Note that we did not remove any outliers in the data analysis.
Subjective data revealed that almost all participants found the examined methods fast, accurate, and easy to use. However, they found shortest-path significantly more difficult to learn than clockwise, for the reasons discussed earlier. Besides, unlike in the previous study, most participants were neutral about using the methods on their smartwatches.

8 User Study 3: Comparative Study with Representative Users

We compared the manual and the automated (clockwise) Crownboard with the default virtual keyboard on Wear OS (Qwerty with gesture typing and the predictive system enabled) in a user study involving people with limited motor skills. Based on the findings of the previous study, we excluded the shortest-path version, as users had difficulty learning the method.

8.1 Participants

Ten volunteers with motor impairments took part in the study. They were recruited using the same procedure as the focus group study (Section 3). Table 7 presents their demographic information. P03 had also participated in the focus group. All participants responded that they use a physical Qwerty to enter text on desktop and laptop computers and a virtual Qwerty to enter text on smartphones. P01 and P09 also used a virtual Qwerty to enter text on tablet computers. P10, on the other hand, used a stylus to write on tablets; she also occasionally used speech to enter text on smartphones. Each participant received U.S. $40 for volunteering in the study.
Table 7:
ID | Gender | Age | Degree | English | Phone (Years) | Watch (Years) | Glasses | Cause of Limited Dexterity
P01 | Woman | 72 | University | Native | 20 | 0.5 | Yes | Age-related limited dexterity
P02 | Woman | 48 | Secondary | Native | 14 | 0 | No | Quadriplegic, paralyzed from the shoulders down; uses the pinkie to type
P03 | Woman | 37 | University | Native | 19 | 0 | Yes | Multiple sclerosis
P04 | Man | 65 | Secondary | Advanced | 15 | 8 | Yes | Dwarfism, arthritis
P05 | Woman | 65 | University | Native | 20 | 0 | Yes | Age-related limited dexterity
P06 | Man | 34 | University | Native | 5 | 1 | No | Limited dexterity due to hand structure
P07 | Woman | 30 | Secondary | Advanced | 4 | 2 | Yes | Brain injury related limited dexterity
P08 | Woman | 62 | Secondary | Native | 10 | 0 | Yes | Right carpal tunnel and shoulder surgery; impingement to jaw, elbow, and shoulder; pain-nerve illness
P09 | Woman | 62 | University | Native | 2 | 0 | No | C5-6 quadriplegic, a complete spinal cord injury
P10 | Woman | 65 | University | Native | 12 | 0 | Yes | Quadriplegic, C5-6 spinal cord injury
Table 7: Participant demographics in the final study. “English” indicates proficiency in the language, “phone” and “watch” indicate experience with using smartphones and smartwatches in years, and “glasses” indicates whether participants wore corrective eyeglasses.

8.2 Design

The study used a within-subjects design. The independent variable was method (three conditions: default Qwerty, manual Crownboard, and automated Crownboard). All methods, including the Qwerty, were displayed on the smartwatch. The dependent variables were the performance metrics used in the previous studies (Section 5.4). Each participant used the three methods to enter short English phrases from the MacKenzie and Soukoreff [59] set in four blocks. Each block contained two random unique phrases from the set. In summary, the design was: 10 participants × 3 methods (default, [(manual, automated) counterbalanced]) × 4 blocks × 2 random phrases = 165 phrases in total (160 phrases for manual and automated, plus the 5 phrases participants managed to enter with the default Qwerty; see Section 8.4).

8.3 Procedure

We used the same procedure and safety measures as the previous studies (Section 5.5), with a few minor differences. First, because adequate transportation was unavailable to the participants, the study was conducted in locations convenient to them (Fig. 14); the researcher met one participant at a time in a quiet place. Second, the default condition was always introduced first, considering that participants might not be able to use it at all, while the other two conditions were counterbalanced. Third, we excluded the practice session to reduce the duration of the study. Finally, in addition to the usability questionnaire, participants completed the NASA-TLX questionnaire [36] to rate the examined methods' perceived workload. This protocol was reviewed and approved by the Institutional Review Board (IRB).
Figure 14:
Figure 14: P06 and P10 entering text with the manual (left) and the automated (right) Crownboard, respectively, in the final study.

8.4 Results

A complete study session took about 60 minutes, including demonstration, questionnaires, and interviews. None of the participants were able to complete all sessions with the default Qwerty; in fact, 70% of them (N = 7) were unable to transcribe even a single phrase with the method. The remaining participants (N = 3) could enter only 1–3 phrases each, while committing numerous errors in the process. Their entry speeds with manual (M = 2.67 wpm) and automated (M = 2.82 wpm) were 19% and 23% faster than with Qwerty (M = 2.17 wpm). Likewise, their error rates with manual (M = 1.5%) and automated (M = 0%) were 83% and 100% lower than with Qwerty (M = 8.9%). We, therefore, excluded Qwerty from all quantitative analyses.
A Shapiro-Wilk test revealed that the response variable residuals were normally distributed, and a Mauchly's test indicated that the assumption of sphericity had not been violated. Hence, we used a repeated-measures ANOVA for all quantitative within-subjects factors and a Friedman test for the questionnaire data.

8.4.1 Entry Speed.

An ANOVA failed to identify a significant effect of method (manual, automated) on entry speed (F(1, 9) = 1.57, p = .24). There was also no significant effect of block (F(3, 27) = 2.34, p = .09). Average entry speeds with manual and automated were 2.41 wpm (SD = 1.13) and 2.09 wpm (SD = 0.78), respectively. Fig. 15a illustrates the average entry speed per block for the methods, fitted to power trendlines. A t-test failed to identify a significant difference between participants with and without eyeglasses for either the manual (p = .46) or the automated Crownboard (p = .74).

8.4.2 Error Rate.

An ANOVA failed to identify a significant effect of method (manual, automated) on error rate (F(1, 9) = 0.40, p = .54). There was also no significant effect of block (F(3, 27) = 1.12, p = .36). Average error rates for the manual and automated Crownboard were 0.75% (SD = 3.85) and 1.22% (SD = 4.15), respectively. Fig. 15b illustrates the average error rate per block for both methods, fitted to power trendlines.
Figure 15:
Figure 15: (a) Average words per minute (wpm) and (b) error rate (%) per block for the user study with representative users, with both manual and automated Crownboard, fitted to power trendlines.

8.4.3 User Feedback.

A Friedman test identified significant effects of method (Qwerty, manual, automated) on perceived speed (χ² = 14.29, df = 2, p < .001), accuracy (χ² = 15.68, df = 2, p < .0001), ease of use (χ² = 14.0, df = 2, p < .0001), and willingness to use (χ² = 9.25, df = 2, p < .01). However, no significant effect was identified on learnability (χ² = 0.25, df = 2, p = .88). Fig. 16a illustrates median user ratings of the three methods.

8.4.4 Task Load Index.

For the analysis, we considered raw NASA-TLX scores by individual sub-scales, a common practice in the literature [35]. A Friedman test identified significant effects of method (Qwerty, manual, automated) on performance (χ² = 11.7, df = 2, p < .01), effort (χ² = 6.06, df = 2, p < .05), and frustration (χ² = 6.73, df = 2, p < .05). However, there were no significant effects on mental demand (χ² = 0.24, df = 2, p = .89), physical demand (χ² = 1.52, df = 2, p = .47), or temporal demand (χ² = 3.71, df = 2, p = .16). Fig. 16b illustrates median NASA-TLX ratings of all methods.
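For reference, a Friedman test on one such sub-scale looks as follows in scipy. The scores are illustrative placeholders (one per participant per method), not the study data:

```python
# Friedman test across the three related samples (one rating per participant
# per method). The scores below are illustrative, not the study data.
from scipy.stats import friedmanchisquare

qwerty    = [18, 17, 19, 16, 20, 18, 17, 19, 18, 16]
manual    = [ 6,  8,  5,  9,  7,  6,  8,  7,  5,  9]
automated = [ 7,  6,  8,  7,  9,  6,  7,  8,  6,  7]

chi2, p = friedmanchisquare(qwerty, manual, automated)
print(f"chi2(2) = {chi2:.2f}, p = {p:.4f}")
```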
Figure 16:
Figure 16: (a) Median user ratings of the three methods on a 5-point Likert scale, where 1–5 signifies disagree–agree, and (b) median raw NASA-TLX [35] scores of the examined methods on a 20-point scale, where 1–20 signifies very low–very high, except for "performance", where 1–20 signifies perfect–failure. Error bars represent ± 1 standard deviation (SD). Red asterisks indicate statistically significant differences.
Figure 17:
Figure 17: Average words per minute (wpm) per block for each of the ten participants (P01–P10) in the user study with representative users, with the manual and automated Crownboard (fitted to power trendlines) and the default Qwerty.

8.5 Discussion

Participants achieved an average entry speed of 2.1–2.4 wpm with Crownboard, which is encouraging considering that the median entry speed with commercial techniques for people with physical disabilities is 5.6 wpm on desktop platforms [47]. In fact, the proposed method performed better than some popular academic accessibility solutions for desktop platforms, which yield 1–4 wpm on average (see Table 1).
Only three participants (P01, P03, P05) were able to enter text with the default keyboard. P05 entered three phrases with a high error rate of 27% before giving up, while P01 and P03 could complete only one phrase each. These participants had mild to moderate motor impairments compared to the others: P01 and P05 had limited dexterity due to old age (mostly hand tremor), and P03 showed early symptoms of multiple sclerosis, while the remaining participants had severe motor impairments (Table 7). Nevertheless, all participants were able to complete all blocks with the proposed methods, and those who could use the default method were 19–23% faster and 83–100% more accurate with the new methods. These results suggest that the proposed methods are accessible to people with various levels of motor impairment. The subjective evaluation also supports this: participants found the new methods significantly faster, more accurate, and easier to use than the default method (Fig. 16a). Although not statistically significant, we also find it encouraging that many participants found the proposed methods more learnable than (40%, N = 4), or as learnable as (20%, N = 2), the default method, especially because all of them were users of Qwerty on their desktop and laptop platforms. Naturally, almost all of them (90%, N = 9) preferred these methods over the default method for their smartwatches (statistically significant), while one participant was neutral about it.
Interestingly, there were no statistically significant differences between the methods in terms of mental, physical, and temporal demands (Fig. 16). When asked about this in the post-study interview, participants responded that they had rated the demands of tapping on the display and rotating the crown, rather than the techniques under investigation. This confusion was caused by the phrasing of the questions, which asked, "How mentally demanding was the task?", "How physically demanding was the task?", and "How hurried or rushed was the pace of the task?", where "the task" was interpreted as tapping and rotating. Note that we used the exact questions from the original questionnaire, without modifications of any kind. Participants felt that tapping on the screen and rotating the crown are somewhat comparable in mental, physical, and temporal demands, but that the techniques built on them make the difference in performance and effort. This is reflected in their responses to the subsequent questions: "How successful were you in accomplishing what you were asked to do?", "How hard did you have to work to accomplish your level of performance?", and "How insecure, discouraged, irritated, stressed, and annoyed were you?". Participants felt they performed significantly better with the new methods than with the default method. They also felt that the default method required significantly more effort than the new ones and, consequently, were significantly more frustrated with it (Fig. 16).
One interesting observation is that not all participants used the index finger to operate the crown. Six participants (P01, P03, P05, P06, P07, P08) used the index finger, one (P02) used the little finger, one (P04) used the middle finger, and the remaining two (P09, P10) used the upper joint of the thumb. In Fig. 14, P10 can be seen operating Crownboard with the upper joint of the thumb. Relevantly, the possibility of using other fingers, joints, and knuckles was also mentioned in the focus group (Section 3.3.6). This is an encouraging finding, since it extends the usability of the proposed methods beyond the scope of this work.
Manual Crownboard was about 13% faster and 37% more accurate than automated Crownboard, but these differences were not statistically significant (Fig. 15). This is because there was no general trend in performance with these methods: some participants performed much better with one method and some with the other (Fig. 17). We speculate this is due to personal preference, as we failed to identify any effect of age or severity of impairment on performance. Some participants praised the automated method, while others criticized it. P02 liked that it does not require rotating the crown, while P04 complained, "It takes longer [with the method] because if you miss the letter, you need to wait when it circles around." There was no significant effect of block on entry speed, presumably due to the brief exposure to the methods: participants entered only two phrases in each of the four blocks. Yet, average entry speed over blocks correlated well with the power law of practice [83] for the manual Crownboard (R² = 0.96) and moderately for the automated Crownboard (R² = 0.54). This suggests that participants are likely to get much faster with both methods with practice. It is also important to note that the study used a fixed scanning interval of 1,000 ms; in theory, entry speed will increase with a shorter interval. Relevantly, prior research showed that users usually prefer, and can use, a much shorter dwell once they are more familiar with an approach [11]. The findings of the focus group also corroborate this (Section 3.3.4). However, further investigation is needed to fully explore this. The manual and automated Crownboard were comparable in terms of accuracy (0.0% vs. 0.45% ER). We did not find any significant differences between the methods in the subjective analyses. As discussed earlier, participants liked both methods significantly better than the default method. These results indicate that both the automated and the manual Crownboard can enable people with limited dexterity to enter text on smartwatches, but people with different levels of motor impairment are likely to prefer different versions of the keyboard.
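To see why the scanning interval bounds entry speed, consider a crude back-of-envelope model. It rests on our assumptions, not the paper's analysis: a uniformly random target zone is on average 3.5 steps away under clockwise scanning, and word suggestions and crown-press time are ignored.

```python
# Back-of-envelope ceiling on entry speed for automated clockwise scanning.
# Assumes ~3.5 scan steps per letter on average (uniform over 0..7 steps with
# 8 zones) and ignores word suggestions and crown-press time (our assumptions).
def ceiling_wpm(interval_s: float, mean_steps: float = 3.5) -> float:
    seconds_per_char = mean_steps * interval_s
    return 60 / (5 * seconds_per_char)    # 5 characters per word

for t in (1.0, 0.75, 0.5):
    print(f"{t:.2f} s interval -> ~{ceiling_wpm(t):.1f} wpm")
# 1.00 s -> ~3.4 wpm, 0.75 s -> ~4.6 wpm, 0.50 s -> ~6.9 wpm
```

The model is only indicative: word suggestions can raise, and selection overhead can lower, the actual rate.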

9 Conclusion

We presented a series of user studies to evaluate and compare different versions of Crownboard. In the first study, we compared manual and automated clockwise scanning of Crownboard, where manual scanning was found to be faster and more accurate. In the second study, we compared automated clockwise scanning with a shortest-path scanning approach that scans towards the most probable zone. We found that participants were unable to learn the shortest-path approach since they could not always anticipate the direction of the scan; consequently, automated clockwise scanning was faster and more accurate than shortest-path. Finally, we conducted a study with ten people with limited dexterity to compare the default smartwatch Qwerty with both the manual and the automated Crownboard. In that study, most participants were unable to use the default Qwerty, but all could use both versions of Crownboard. Participants who could enter text with all techniques reached 2.62 wpm with a 0% error rate with manual and 2.25 wpm with a 0.45% error rate with automated Crownboard, which were 9% and 23% faster than the default Qwerty.

10 Future Work

In the future, we will conduct a longitudinal study to investigate how users learn the methods. We will strengthen the decoder with machine learning approaches, such as long short-term memory (LSTM) networks and transformers, and improve the rotation algorithm of the shortest-path Crownboard.

Acknowledgments

This work was funded by a Hellman Fellowship [2]. We thank the San Francisco Mayor’s Office on Disability, San Francisco Independent Living Resource Center, Golden Gate Regional Center, Modesto Healthy Aging Association, and the Bay Area Outreach and Recreation Program for helping us with the recruitment process. We also thank all our participants for their patience and valuable feedback.

Footnote

1
Feature phone is a class of mobile phone that “retains the form factor of earlier generations of mobile telephones, typically with press-button based inputs and a small non-touch display” [1].

Supplementary Material

MP4 File (3544548.3580770-video-figure.mp4)
Video Figure
MP4 File (3544548.3580770-video-preview.mp4)
Video Preview
MP4 File (3544548.3580770-talk-video.mp4)
Pre-recorded Video Presentation

References

[1]
2022. Feature phone. https://en.wikipedia.org/w/index.php?title=Feature_phone&oldid=1118162309
[2]
Lorena Anderson. 2020. $3.5 Million Hellman Endowment Expands Future of Research at UC Merced. https://hsri.ucmerced.edu/news/2020/35-million-hellman-endowment-expands-future-research-uc-merced
[3]
Lisa Anthony, YooJin Kim, and Leah Findlater. 2013. Analyzing User-Generated Youtube Videos to Understand Touchscreen Use by People with Motor Impairments. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Paris, France) (CHI ’13). Association for Computing Machinery, New York, NY, USA, 1223–1232. https://doi.org/10.1145/2470654.2466158
[4]
Ahmed Sabbir Arif and Ali Mazalek. 2016. A Survey of Text Entry Techniques for Smartwatches. In Human-Computer Interaction. Interaction Platforms and Techniques, Masaaki Kurosu (Ed.). Lecture Notes in Computer Science, Vol. 9732. Springer International Publishing, Cham, 255–267. https://doi.org/10.1007/978-3-319-39516-6_24
[5]
Ahmed Sabbir Arif, Michel Pahud, Ken Hinckley, and Bill Buxton. 2014. Experimental Study of Stroke Shortcuts for a Touchscreen Keyboard with Gesture-Redundant Keys Removed. In Proceedings of Graphics Interface 2014 (GI ’14). Canadian Information Processing Society, Toronto, ON, Canada, 43–50. http://dl.acm.org/citation.cfm?id=2619648.2619657
[6]
Ahmed Sabbir Arif and Wolfgang Stuerzlinger. 2009. Analysis of Text Entry Performance Metrics. In 2009 IEEE Toronto International Conference Science and Technology for Humanity (TIC-STH). 100–105. https://doi.org/10.1109/TIC-STH.2009.5444533
[7]
Behrooz Ashtiani and I. Scott MacKenzie. 2010. BlinkWrite2: An Improved Text Entry Method Using Eye Blinks. In Proceedings of the 2010 Symposium on Eye-Tracking Research; Applications (Austin, Texas) (ETRA ’10). Association for Computing Machinery, New York, NY, USA, 339–345. https://doi.org/10.1145/1743666.1743742
[8]
Melanie Baljko and Andrew Tam. 2006. Indirect Text Entry Using One or Two Keys. In Proceedings of the 8th International ACM SIGACCESS Conference on Computers and Accessibility (Portland, Oregon, USA) (Assets ’06). Association for Computing Machinery, New York, NY, USA, 18–25. https://doi.org/10.1145/1168987.1168992
[9]
Mohammed Belatar and Franck Poirier. 2008. Text Entry for Mobile Devices and Users with Severe Motor Impairments: Handiglyph, a Primitive Shapes Based Onscreen Keyboard. In Proceedings of the 10th International ACM SIGACCESS Conference on Computers and Accessibility (Halifax, Nova Scotia, Canada) (Assets ’08). Association for Computing Machinery, New York, NY, USA, 209–216. https://doi.org/10.1145/1414471.1414510
[10]
Ian Carlos Campbell. 2021. Google Remembers Wear OS Long Enough to Add a New Keyboard. https://www.theverge.com/2021/5/6/22423707/google-wear-os-gboard-swipe-type
[11]
Steven J. Castellucci, I. Scott MacKenzie, Mudit Misra, Laxmi Pandey, and Ahmed Sabbir Arif. 2019. TiltWriter: Design and Evaluation of a No-Touch Tilt-Based Text Entry Method for Handheld Devices. In Proceedings of the 18th International Conference on Mobile and Ubiquitous Multimedia(MUM ’19). ACM, New York, NY, USA, 1–8. https://doi.org/10.1145/3365610.3365629 Article 7.
[12]
Karen B. Chen, Anne B. Savage, Amrish O. Chourasia, Douglas A. Wiegmann, and Mary E. Sesto. 2013. Touch screen performance by individuals with and without motor control disabilities. Applied Ergonomics 44, 2 (2013), 297–302. https://doi.org/10.1016/j.apergo.2012.08.004
[13]
Xiang ’Anthony’ Chen, Tovi Grossman, and George Fitzmaurice. 2014. Swipeboard: A Text Entry Technique for Ultra-Small Interfaces That Supports Novice to Expert Transitions. In Proceedings of the 27th Annual Acm Symposium on User Interface Software and Technology(UIST ’14). ACM, New York, NY, USA, 615–620. https://doi.org/10.1145/2642918.2647354
[14]
Gennaro Costagliola, Mattia Rosa, Raffaele D’Arco, Sabato Gregorio, Vittorio Fuccella, and Daniele Lupo. 2019. C-QWERTY: a text entry method for circular smartwatches (S). In DMSVIVA. 51–57. https://doi.org/10.18293/DMSVIVA2019-014
[15]
Lucy Diep and Gregor Wolbring. 2013. Who Needs to Fit in? Who Gets to Stand out? Communication Technologies Including Brain-Machine Interfaces Revealed from the Perspectives of Special Education School Teachers Through an Ableism Lens. Education Sciences 3, 1 (March 2013), 30–49. https://doi.org/10.3390/educsci3010030
[16]
Sacha N. Duff, Curt B. Irwin, Jennifer L. Skye, Mary E. Sesto, and Douglas A. Wiegmann. 2010. The Effect of Disability and Approach on Touch Screen Performance during a Number Entry Task. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 54, 6 (2010), 566–570. https://doi.org/10.1177/154193121005400605
[17]
Mark D. Dunlop and Andrew Crossan. 2000. Predictive Text Entry Methods for Mobile Phones. Personal Technologies 4, 2 (June 2000), 134–143. https://doi.org/10.1007/BF01324120
[18]
Mark D. Dunlop, Andreas Komninos, and Naveen Durga. 2014. Towards High Quality Text Entry on Smartwatches. In CHI ’14 Extended Abstracts on Human Factors in Computing Systems(CHI EA ’14). Association for Computing Machinery, Toronto, Ontario, Canada, 2365–2370. https://doi.org/10.1145/2559206.2581319
[19]
Aarthi Easwara Moorthy and Kim-Phuong L. Vu. 2014. Voice Activated Personal Assistant: Acceptability of Use in the Public Space. In Human Interface and the Management of Information. Information and Knowledge in Applications and Services(Lecture Notes in Computer Science), Sakae Yamamoto (Ed.). Springer International Publishing, Cham, 324–334. https://doi.org/10.1007/978-3-319-07863-2_32
[20]
Aarthi Easwara Moorthy and Kim-Phuong L Vu. 2015. Privacy concerns for use of voice activated personal assistant in the public space. International Journal of Human-Computer Interaction 31, 4(2015), 307–335.
[21]
Christos Efthymiou and M. Halvey. 2016. Evaluating the Social Acceptability of Voice Based Smartwatch Search. In AIRS. https://doi.org/10.1007/978-3-319-48051-0_20
[22]
Silvia B. Fajardo-Flores, Laura S. Gaytán-Lugo, Pedro C. Santana-Mancilla, and Miguel A. Rodríguez-Ortiz. 2017. Mobile Accessibility for People with Combined Visual and Motor Impairment: A case Study. In Proceedings of the 8th Latin American Conference on Human-Computer Interaction(CLIHC ’17). Association for Computing Machinery, New York, NY, USA, 1–4. https://doi.org/10.1145/3151470.3151476
[23]
Torsten Felzer, Ian Scott MacKenzie, Philipp Beckerle, and Stephan Rinderknecht. 2010. Qanti: A Software Tool for Quick Ambiguous Non-standard Text Input. In Computers Helping People with Special Needs, Klaus Miesenberger, Joachim Klaus, Wolfgang Zagler, and Arthur Karshmer (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 128–135.
[24]
Torsten Felzer, I. Scott MacKenzie, and Stephan Rinderknecht. 2012. DualScribe: A Keyboard Replacement for Those with Friedreich’s Ataxia and Related Diseases. In Computers Helping People with Special Needs(Lecture Notes in Computer Science), Klaus Miesenberger, Arthur Karshmer, Petr Penaz, and Wolfgang Zagler (Eds.). Springer, Berlin, Heidelberg, 431–438. https://doi.org/10.1007/978-3-642-31534-3_64
[25]
Torsten Felzer and Rainer Nordmann. 2006. Alternative Text Entry Using Different Input Methods. In Proceedings of the 8th international ACM SIGACCESS conference on Computers and accessibility(Assets ’06). Association for Computing Machinery, New York, NY, USA, 10–17. https://doi.org/10.1145/1168987.1168991
[26]
Torsten Felzer and Stephan Rinderknecht. 2009. 3DScan: An Environment Control System Supporting Persons with Severe Motor Impairments. In Proceedings of the 11th International ACM SIGACCESS Conference on Computers and Accessibility (Pittsburgh, Pennsylvania, USA) (Assets ’09). Association for Computing Machinery, New York, NY, USA, 213–214. https://doi.org/10.1145/1639642.1639681
[27]
Leah Findlater, Karyn Moffatt, Jon E. Froehlich, Meethu Malu, and Joan Zhang. 2017. Comparing Touchscreen and Mouse Input Performance by People With and Without Upper Body Motor Impairments. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (Denver, Colorado, USA) (CHI ’17). Association for Computing Machinery, New York, NY, USA, 6056–6061. https://doi.org/10.1145/3025453.3025603
[28]
Jun Gong, Zheer Xu, Qifan Guo, Teddy Seyed, Xiang ’Anthony’ Chen, Xiaojun Bi, and Xing-Dong Yang. 2018. WrisText: One-Handed Text Entry on Smartwatch Using Wrist Gestures. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems(CHI ’18). ACM, New York, NY, USA, 181:1–181:14. https://doi.org/10.1145/3173574.3173755
[29]
Mitchell Gordon, Tom Ouyang, and Shumin Zhai. 2016. WatchWriter: Tap and Gesture Typing on a Smartwatch Miniature Keyboard with Statistical Decoding. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems(CHI ’16). ACM, New York, NY, USA, 3817–3821. https://doi.org/10.1145/2858036.2858242
[30]
Tiago Guerreiro, Hugo Nicolau, Joaquim Jorge, and Daniel Gonçalves. 2010. Towards Accessible Touch Interfaces. In Proceedings of the 12th International ACM SIGACCESS Conference on Computers and Accessibility (Orlando, Florida, USA) (ASSETS ’10). Association for Computing Machinery, New York, NY, USA, 19–26. https://doi.org/10.1145/1878803.1878809
[31]
Tiago João Vieira Guerreiro, Hugo Nicolau, Joaquim Jorge, and Daniel Gonçalves. 2010. Assessing Mobile Touch Interfaces for Tetraplegics. In Proceedings of the 12th International Conference on Human Computer Interaction with Mobile Devices and Services (Lisbon, Portugal) (MobileHCI ’10). Association for Computing Machinery, New York, NY, USA, 31–34. https://doi.org/10.1145/1851600.1851608
[32]
Aakar Gupta and Ravin Balakrishnan. 2016. DualKey: Miniature Screen Text Entry via Finger Identification. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems(CHI ’16). Association for Computing Machinery, San Jose, California, USA, 59–70. https://doi.org/10.1145/2858036.2858052
[33]
Timo Götzelmann and Pere-Pau Vázquez. 2015. InclineType: An Accelerometer-Based Typing Approach for Smartwatches. In Proceedings of the XVI International Conference on Human Computer Interaction(Interacción ’15). Association for Computing Machinery, Vilanova i la Geltru, Spain, 1–4. https://doi.org/10.1145/2829875.2829929
[34]
Karin Harbusch and Michael Kühn. 2003. Towards an Adaptive Communication Aid with Text Input from Ambiguous Keyboards. In Proceedings of the Tenth Conference on European Chapter of the Association for Computational Linguistics - Volume 2 (Budapest, Hungary) (EACL ’03). Association for Computational Linguistics, USA, 207–210. https://doi.org/10.3115/1067737.1067786
[35]
Sandra G. Hart. 2006. NASA-Task Load Index (NASA-TLX); 20 Years Later. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 50, 9 (Oct. 2006), 904–908. https://doi.org/10.1177/154193120605000909
[36]
Sandra G. Hart and Lowell E. Staveland. 1988. Development of NASA-TLX (Task Load Index): Results of Empirical and Theoretical Research. In Advances in Psychology. Vol. 52. Elsevier, Amsterdam, The Netherlands, 139–183. https://doi.org/10.1016/S0166-4115(08)62386-9
[37]
Jonggi Hong, Seongkook Heo, Poika Isokoski, and Geehyuk Lee. 2015. SplitBoard: A Simple Split Soft Keyboard for Wristwatch-Sized Touch Screens. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems(CHI ’15). ACM, New York, NY, USA, 1233–1236. https://doi.org/10.1145/2702123.2702273
[38]
Min-Chieh Hsiu, Da-Yuan Huang, Chi An Chen, Yu-Chih Lin, Yi-ping Hung, De-Nian Yang, and Mike Chen. 2016. Forceboard: Using Force as Input Technique on Size-Limited Soft Keyboard. In Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct(MobileHCI ’16). Association for Computing Machinery, Florence, Italy, 599–604. https://doi.org/10.1145/2957265.2961827
[39]
Apple Inc. 2019. Use Siri on your Apple Watch. https://support.apple.com/en-us/HT205184
[40]
Curt B. Irwin and Mary E. Sesto. 2012. Performance and touch characteristics of disabled and non-disabled participants during a reciprocal tapping task using touch screen technology. Applied Ergonomics 43, 6 (2012), 1038–1043. https://doi.org/10.1016/j.apergo.2012.03.003
[41]
Christina L. James and Kelly M. Reischel. 2001. Text Input for Mobile Devices: Comparing Model Prediction to Actual Performance. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Seattle, Washington, USA) (CHI ’01). Association for Computing Machinery, New York, NY, USA, 365–371. https://doi.org/10.1145/365024.365300
[42]
Haiyan Jiang and Dongdong Weng. 2020. HiPad: Text entry for Head-Mounted Displays Using Circular Touchpad. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). IEEE, Atlanta, GA, USA, 692–703. https://doi.org/10.1109/VR46266.2020.1581236395562
[43]
Yoonhyuk Jung, Seongcheol Kim, and Boreum Choi. 2016. Consumer valuation of the wearables: The case of smartwatches. Computers in Human Behavior 63 (2016), 899 – 905. https://doi.org/10.1016/j.chb.2016.06.040
[44]
Tomonari Kamba, Shawn A. Elson, Terry Harpold, Tim Stamper, and Piyawadee Sukaviriya. 1996. Using Small Screen Space More Efficiently. In Proceedings of the SIGCHI conference on Human factors in computing systems common ground - CHI ’96. ACM Press, Vancouver, British Columbia, Canada, 383–390. https://doi.org/10.1145/238386.238582
[45]
Jee-Eun Kim, Masahiro Bessho, and Ken Sakamura. 2019. Towards a Smartwatch Application to Assist Students with Disabilities in an IoT-enabled Campus. In 2019 IEEE 1st Global Conference on Life Sciences and Technologies (LifeTech). 243–246. https://doi.org/10.1109/LifeTech.2019.8883995
[46]
Ki Joon Kim. 2017. Shape and size matter for smartwatches: Effects of screen shape, screen size, and presentation mode in wearable communication. Journal of Computer-Mediated Communication 22, 3 (2017), 124–140.
[47]
Heidi Koester. 2018. Text Entry Rate for People with Physical Disabilities [infographic]. https://kpronline.com/blog/text-entry-rate-for-people-with-physical-disabilities-infographic/ Section: Computer Accessibility.
[48]
Andreas Komninos and Mark Dunlop. 2014. Text Input on a Smart Watch. IEEE Pervasive Computing 13, 4 (Oct. 2014), 50–58. https://doi.org/10.1109/MPRV.2014.77
[49]
Luis A. Leiva, Alireza Sahami, Alejandro Catala, Niels Henze, and Albrecht Schmidt. 2015. Text Entry on Tiny QWERTY Soft Keyboards. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (Seoul, Republic of Korea) (CHI ’15). Association for Computing Machinery, New York, NY, USA, 669–678. https://doi.org/10.1145/2702123.2702388
[50]
S. H. Levine and C. Goodenough-Trepagnier. 1990. Customised Text Entry Devices for Motor-Impaired Users. Applied Ergonomics 21, 1 (March 1990), 55–62. https://doi.org/10.1016/0003-6870(90)90074-8
[51]
Ning Li, Tuoyang Zhou, Yingwei Zhou, Chen Guo, Deqiang Fu, Xiaoqing Li, and Zijing Guo. 2019. Research on Human-Computer Interaction Mode of Speech Recognition Based on Environment Elements of Command and Control System. In 2019 5th International Conference on Big Data and Information Analytics (BigDIA). 170–175. https://doi.org/10.1109/BigDIA.2019.8802812
[52]
Yun-Lung Lin, Ming-Chung Chen, Ya-Ping Wu, Yao-Ming Yeh, and Hwa-Pey Wang. 2007. A Flexible On-Screen Keyboard: Dynamically Adapting for Individuals’ Needs. In Universal Access in Human-Computer Interaction. Applications and Services(Lecture Notes in Computer Science), Constantine Stephanidis (Ed.). Springer, Berlin, Heidelberg, 371–379. https://doi.org/10.1007/978-3-540-73283-9_42
[53]
Yun-Lung Lin, Ting-Fang Wu, Ming-Chung Chen, Yao-Ming Yeh, and Hwa-Pey Wang. 2008. Designing a Scanning On-Screen Keyboard for People with Severe Motor Disabilities. In Computers Helping People with Special Needs(Lecture Notes in Computer Science), Klaus Miesenberger, Joachim Klaus, Wolfgang Zagler, and Arthur Karshmer (Eds.). Springer, Berlin, Heidelberg, 1184–1187. https://doi.org/10.1007/978-3-540-70540-6_178
[54]
Motorola Mobility LLC. 2020. Voice commands – Moto 360 2nd Generation. https://support.motorola.com/us/en/products/wearables/moto-360-2nd-generation/documents/ms106996
[55]
Simon Lydell. 2015. English Bigram and Letter Pair Frequencies from the Google Corpus Data in JSON Format. https://gist.github.com/lydell/c439049abac2c9226e53
[56]
I. Scott MacKenzie. 2009. The One-Key Challenge: Searching for a Fast One-Key Text Entry Method. In Proceedings of the 11th International ACM SIGACCESS Conference on Computers and Accessibility(Pittsburgh, Pennsylvania, USA) (Assets ’09). Association for Computing Machinery, New York, NY, USA, 91–98. https://doi.org/10.1145/1639642.1639660
[57]
I. Scott MacKenzie. 2013. Human-Computer Interaction: An Empirical Research Perspective (first ed.). Morgan Kaufmann, Amsterdam.
[58]
I. Scott Mackenzie and Torsten Felzer. 2010. SAK: Scanning Ambiguous Keyboard for Efficient One-Key Text Entry. ACM Trans. Comput.-Hum. Interact. 17, 3 (July 2010), 11:1–11:39. https://doi.org/10.1145/1806923.1806925
[59]
I. Scott MacKenzie and R. William Soukoreff. 2003. Phrase Sets for Evaluating Text Entry Techniques. In CHI ’03 Extended Abstracts on Human Factors in Computing Systems(CHI EA ’03). ACM, New York, NY, USA, 754–755. https://doi.org/10.1145/765891.765971
[60]
Meethu Malu, Pramod Chundury, and Leah Findlater. 2018. Exploring Accessible Smartwatch Interactions for People with Upper Body Motor Impairments. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (Montreal QC, Canada) (CHI ’18). Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3173574.3174062
[61]
Meethu Malu, Pramod Chundury, and Leah Findlater. 2019. Motor Accessibility of Smartwatch Touch and Bezel Input. In The 21st International ACM SIGACCESS Conference on Computers and Accessibility (Pittsburgh, PA, USA) (ASSETS ’19). Association for Computing Machinery, New York, NY, USA, 563–565. https://doi.org/10.1145/3308561.3354638
[62]
Kyle Montague, Hugo Nicolau, and Vicki L. Hanson. 2014. Motor-Impaired Touchscreen Interactions in the Wild. In Proceedings of the 16th International ACM SIGACCESS Conference on Computers; Accessibility (Rochester, New York, USA) (ASSETS ’14). Association for Computing Machinery, New York, NY, USA, 123–130. https://doi.org/10.1145/2661334.2661362
[63]
John T. Morris, W. Mark Sweatman, and Michael L. Jones. 2017. Smartphone Use and Activities by People with Disabilities: User Survey 2016. Journal on Technology & Persons with Disabilities 5 (2017), 50–66.
[64]
Martez E. Mott, Radu-Daniel Vatavu, Shaun K. Kane, and Jacob O. Wobbrock. 2016. Smart Touch: Improving Touch Accuracy for People with Motor Impairments with Template Matching. Association for Computing Machinery, New York, NY, USA, 1934–1946. https://doi.org/10.1145/2858036.2858390
[65]
Martez E. Mott and Jacob O. Wobbrock. 2019. Cluster Touch: Improving Touch Accuracy on Smartphones for People with Motor and Situational Impairments. Association for Computing Machinery, New York, NY, USA, 1–14. https://doi.org/10.1145/3290605.3300257
[66]
Maia Naftali and Leah Findlater. 2014. Accessibility in Context: Understanding the Truly Mobile Experience of Smartphone Users with Motor Impairments. In Proceedings of the 16th International ACM SIGACCESS Conference on Computers and Accessibility (Rochester, New York, USA) (ASSETS ’14). Association for Computing Machinery, New York, NY, USA, 209–216. https://doi.org/10.1145/2661334.2661372
[67]
Hugo Nicolau, Tiago Guerreiro, Joaquim Jorge, and Daniel Gonçalves. 2014. Mobile Touchscreen User Interfaces: Bridging the Gap Between Motor-Impaired and Able-Bodied Users. Universal Access in the Information Society 13, 3 (Aug. 2014), 303–313. https://doi.org/10.1007/s10209-013-0320-5
[68]
Peter Norvig. 2012. English Letter Frequency Counts: Mayzner Revisited or ETAOIN SRHLDCU. http://norvig.com/mayzner.html
[69]
Stephen Oney, Chris Harrison, Amy Ogan, and Jason Wiese. 2013. ZoomBoard: A Diminutive Qwerty Soft Keyboard Using Iterative Zooming for Ultra-Small Devices. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems(CHI ’13). ACM, New York, NY, USA, 2799–2802. https://doi.org/10.1145/2470654.2481387
[70]
Laxmi Pandey, Khalad Hasan, and Ahmed Sabbir Arif. 2021. Acceptability of Speech and Silent Speech Input Methods in Private and Public. In Proceedings of the CHI Conference on Human Factors in Computing Systems(CHI ’21). ACM, New York, NY, USA, Yokohama, Japan, 13 pages. https://doi.org/10.1145/3411764.3445430
[71]
Ondrej Polacek, Zdenek Mikovec, Adam J. Sporka, and Pavel Slavik. 2011. Humsher: A Predictive Keyboard Operated by Humming. In The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility(ASSETS ’11). Association for Computing Machinery, New York, NY, USA, 75–82. https://doi.org/10.1145/2049536.2049552
[72]
Ondrej Polacek, Adam J. Sporka, and Pavel Slavik. 2017. Text Input for Motor-Impaired People. Univers. Access Inf. Soc. 16, 1 (March 2017), 51–72. https://doi.org/10.1007/s10209-015-0433-0
[73]
S. Prabhakar, S. Pankanti, and A.K. Jain. 2003. Biometric Recognition: Security and Privacy Concerns. IEEE Security & Privacy 1, 2 (March 2003), 33–42. https://doi.org/10.1109/MSECP.2003.1193209
[74]
Vijit Prabhu and Girijesh Prasad. 2011. Designing a virtual keyboard with multi-modal access for people with disabilities. In 2011 World Congress on Information and Communication Technologies. 1133–1138. https://doi.org/10.1109/WICT.2011.6141407
[75]
Morten Proschowsky, Nette Schultz, and Niels Ebbe Jacobsen. 2006. An Intuitive Text Input Method for Touch Wheels. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Montréal, Québec, Canada) (CHI ’06). Association for Computing Machinery, New York, NY, USA, 467–470. https://doi.org/10.1145/1124772.1124842
[76]
Ryan Qin, Suwen Zhu, Yu-Hao Lin, Yu-Jung Ko, and Xiaojun Bi. 2018. Optimal-T9: An Optimized T9-like Keyboard for Small Touchscreen Devices. In Proceedings of the 2018 ACM International Conference on Interactive Surfaces and Spaces(ISS ’18). Association for Computing Machinery, Tokyo, Japan, 137–146. https://doi.org/10.1145/3279778.3279786
[77]
Gulnar Rakhmetulla and Ahmed Sabbir Arif. 2020. Senorita: A Chorded Keyboard for Sighted, Low Vision, and Blind Mobile Users. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–13. https://doi.org/10.1145/3313831.3376576
[78]
Gulnar Rakhmetulla and Ahmed Sabbir Arif. 2021. SwipeRing: Gesture Typing on Smartwatches Using a Segmented Qwerty Around the Bezel. In Proceedings of the 2021 Graphics Interface Conference(GI ’21). Canadian Information Processing Society, Canada. https://doi.org/10.20380/GI2021.19
[79]
Zhanna Sarsenbayeva, Vassilis Kostakos, and Jorge Goncalves. 2019. Situationally-Induced Impairments and Disabilities Research. arXiv:1904.06128 [cs] (April 2019). http://arxiv.org/abs/1904.06128
[80]
Tomoki Shibata, Daniel Afergan, Danielle Kong, Beste F. Yuksel, I. Scott MacKenzie, and Robert J.K. Jacob. 2016. DriftBoard: A Panning-Based Text Entry Technique for Ultra-Small Touchscreens. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology(UIST ’16). ACM, New York, NY, USA, 575–582. https://doi.org/10.1145/2984511.2984591
[81]
Gwanseob Shin and Xinhui Zhu. 2011. User Discomfort, Work Posture and Muscle Activity While Using a Touchscreen in a Desktop PC Setting. Ergonomics 54, 8 (Aug. 2011), 733–744. https://doi.org/10.1080/00140139.2011.592604
[82]
Alexandru-Ionut Siean and Radu-Daniel Vatavu. 2021. Wearable Interactions for Users with Motor Impairments: Systematic Review, Inventory, and Research Implications. In The 23rd International ACM SIGACCESS Conference on Computers and Accessibility(ASSETS ’21). Association for Computing Machinery, New York, NY, USA, 1–15. https://doi.org/10.1145/3441852.3471212
[83]
G. S. Snoddy. 1926. Learning and Stability: A Psychophysiological Analysis of a Case of Motor Learning with Clinical Applications. Journal of Applied Psychology 10, 1 (1926), 1–36. https://doi.org/10.1037/h0075814
[84]
Adam J. Sporka, Torsten Felzer, Sri H. Kurniawan, Ondřej Poláček, Paul Haiduk, and I. Scott MacKenzie. 2011. CHANTI: Predictive Text Entry Using Non-Verbal Vocal Input. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems(CHI ’11). Association for Computing Machinery, New York, NY, USA, 2463–2472. https://doi.org/10.1145/1978942.1979302
[85]
Alan Stewart. 2020. texting. https://www.britannica.com/technology/text-messaging
[86]
Kumiko Tanaka-Ishii, Yusuke Inutsuka, and Masato Takeichi. 2002. Entering text with a four-button device. In COLING 2002: The 19th International Conference on Computational Linguistics.
[87]
Takaki Tojo, Tsuneo Kato, and Seiichi Yamamoto. 2018. BubbleFlick: Investigating Effective Interface for Japanese Text Entry on Smartwatches. In Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services(MobileHCI ’18). Association for Computing Machinery, Barcelona, Spain, 1–12. https://doi.org/10.1145/3229434.3229455
[88]
Shari Trewin, Cal Swart, and Donna Pettick. 2013. Physical accessibility of touchscreen smartphones. In Proceedings of the 15th International ACM SIGACCESS Conference on Computers and Accessibility. 1–8.
[89]
Radu-Daniel Vatavu and Ovidiu-Ciprian Ungurean. 2022. Understanding Gesture Input Articulation with Upper-Body Wearables for Users with Upper-Body Motor Impairments. In CHI Conference on Human Factors in Computing Systems(CHI ’22). Association for Computing Machinery, New York, NY, USA, 1–16. https://doi.org/10.1145/3491102.3501964
[90]
Keith Vertanen, Dylan Gaines, Crystal Fletcher, Alex M. Stanage, Robbie Watling, and Per Ola Kristensson. 2019. VelociWatch: Designing and Evaluating a Virtual Keyboard for the Input of Challenging Text. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems(CHI ’19). ACM, New York, NY, USA, 1–14. https://doi.org/10.1145/3290605.3300821
[91]
Daniel Vogel and Patrick Baudisch. 2007. Shift: A Technique for Operating Pen-Based Interfaces Using Touch. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems(CHI ’07). ACM, New York, NY, USA, 657–666. https://doi.org/10.1145/1240624.1240727
[92]
David J. Ward, Alan F. Blackwell, and David J. C. MacKay. 2002. Dasher: A Gesture-Driven Data Entry Interface for Mobile Computing. Human–Computer Interaction 17, 2-3 (Sept. 2002), 199–228. https://doi.org/10.1080/07370024.2002.9667314
[93]
Katie E. Yancosek and Dana Howell. 2009. A Narrative Review of Dexterity Assessments. Journal of Hand Therapy 22, 3 (July 2009), 258–270. https://doi.org/10.1016/j.jht.2008.11.004
[94]
Yeliz Yesilada, Simon Harper, Tianyi Chen, and Shari Trewin. 2010. Small-Device Users Situationally Impaired by Input. Computers in Human Behavior 26, 3 (May 2010), 427–435. https://doi.org/10.1016/j.chb.2009.12.001
[95]
Xin Yi, Chun Yu, Weinan Shi, and Yuanchun Shi. 2017. Is It Too Small?: Investigating the Performances and Preferences of Users When Typing on Tiny Qwerty Keyboards. International Journal of Human-Computer Studies 106 (Oct. 2017), 44–62. https://doi.org/10.1016/j.ijhcs.2017.05.001
[96]
Xin Yi, Chun Yu, Weijie Xu, Xiaojun Bi, and Yuanchun Shi. 2017. COMPASS: Rotational Keyboard on Non-Touch Smartwatches. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems(CHI ’17). Association for Computing Machinery, Denver, Colorado, USA, 705–715. https://doi.org/10.1145/3025453.3025454
