DOI: 10.1145/3613904.3642017

Towards Co-Creating Access and Inclusion: A Group Autoethnography on a Hearing Individual's Journey Towards Effective Communication in Mixed-Hearing Ability Higher Education Settings

Published: 11 May 2024

Abstract

We present a group autoethnography detailing a hearing student’s journey in adopting communication technologies at a mixed-hearing ability summer research camp. Our study focuses on how this student, a research assistant with emerging American Sign Language (ASL) skills, (in)effectively communicates with deaf and hard-of-hearing (DHH) peers and faculty during the ten-week program. The DHH members also reflected on their communication with the hearing student. We depict scenarios and analyze the (in)effectiveness of how emerging technologies like live automatic speech recognition (ASR) and typing are utilized to facilitate communication. We outline communication strategies to engage everyone with diverse signing skills in conversations: directing visual attention, pause-for-attention-and-proceed, and back-channeling via expressive body. These strategies promote inclusive collaboration and leverage technology advancements. Furthermore, we delve into the factors that have motivated individuals to embrace more inclusive communication practices and provide design implications for accessible communication technologies within the mixed-hearing ability context.

1 Introduction

According to the United States Census [29], an estimated 11.5 million Americans, or 3.5% of the population, have some degree of hearing loss. Many Deaf or Hard of Hearing (DHH) individuals use sign-language interpreting or real-time captioning for classroom lectures, meetings, and other events. While these services provide communication access, they are not broadly available in workplaces or educational institutions, especially for teamwork [36]. DHH individuals often must find alternative solutions, and the design and evaluation of captioning technology to support them has been an active research area for a considerable period [15, 19, 20].
Recent research has focused on mixed-ability interactions between hearing and DHH individuals, aiming for collective access using technology. McDonnell et al. analyzed the social and environmental factors necessary for enabling collaborative approaches to the design of captioning technology [26, 27, 38]. Seita et al. studied how automatic speech recognition (ASR) affected hearing individuals’ behaviors in lab settings [35, 36, 37] and co-designed tools for DHH and hearing teams [39]. While these studies stress hearing and DHH individuals’ shared responsibility for inclusive information access, immersive and longer-term studies of hearing individuals learning to communicate effectively with DHH colleagues remain limited.
Therefore, in our ten-week study, we explored how a non-signing hearing individual learned to interact inclusively with DHH colleagues using technologies. We draw on the perspectives of both hearing and DHH individuals in a non-traditional, sign-language-centric setting, as shown in Figure 1. Using group autoethnography, we focused on how the hearing individual adopted and adapted technology-mediated communication strategies, how DHH individuals facilitated such strategies collaboratively, and the patterns of difference that emerged in an entangled world.
Figure 1: Sign-language-centric setting. (a) Enables signers to move through space uninterrupted. (b, c) Enables signers to communicate at a distance where they can see facial expressions and the full dimension of the signer’s “signing space.” Soft, diffused light attuned to the eyes is used in presentation rooms and discussion spaces.
Our contributions from this research are three-fold:
First, we provide practical technology-mediated communication strategies (e.g., directing visual attention) that hearing people without knowledge of sign language can use to communicate effectively with DHH individuals. In addition, we summarize the pros and cons of using common technologies in different social contexts in higher education and workspaces. Such strategies and technologies can be further supported by tactics such as smoothing the visual-attention switches between technologies, which decreases conversation breaks by sharing eye gaze between hearing and DHH individuals.
Secondly, our findings provide empirical evidence supporting the feasibility and significance of using technology in inclusive ways, regardless of the exact technology. Furthermore, we highlight the crucial role of DHH individuals, as the majority participants, in fostering inclusivity within non-traditional sign-language-centric settings. This also highlights the necessity for increased awareness of and patience toward DHH individuals in traditional auditory-focused environments. Our study also contributes methodological insights by utilizing group autoethnography, facilitating the collaborative adoption of more comprehensive inclusive practices.
The autoethnography method is frequently used by researchers to record and reflect on individual thoughts or experiences. However, employing group autoethnography to collectively reflect on group thinking and experience is rare. Thus, our third contribution centers on methodology. By demonstrating how we conducted this group autoethnography to pursue a deeper, multifaceted exploration of interaction phenomena, we hope to provide empirical evidence and practical experience for future HCI researchers who wish to adopt or adapt this methodology.

2 Related Work

2.1 Accessibility in Hearing-DHH Collaborations

Emerging research has started to explore the design and utilization of assistive technology in a broader context, extending beyond individuals with disabilities. A recent group autoethnographic study by Mack et al. offers an intricate understanding of a mixed-ability virtual team’s experience and the influence of the virtual setting on accessibility [22]. Their insights revealed the elements that shaped the mixed-ability team’s accessibility: virtually induced (in)accessibility, power dynamics, remembering lengthy and conflicting accommodations, and allyship. This perspective envisions assistive technology as a form of collaborative effort. For instance, Branham and Kane analyzed the living dynamics of blind and sighted partners, unveiling their collaborative support strategies [5]. Xie et al. investigated remote sighted assistance, in which paired volunteers aid people with blindness [46]. Bennett et al. introduced the concept of interdependence, where access is viewed as a co-created and sustained outcome stemming from the relationship between people and things [3]. Past studies also emphasize that accessibility is not fixed for individuals and that accommodations extend beyond specific assistive technologies; instead, accessibility emerges through ongoing work involving continual mutual attention and adaptable work routines [13, 31]. In summary, collaborative effort is emerging as a crucial approach to addressing accessibility issues, shifting the responsibility away from individuals with disabilities alone.
Given the current study’s emphasis on Deaf-hearing teams, we shift our attention to literature on collaborative practices and communication technology within Deaf-hearing interactions. Jain et al. documented graduate students with disabilities and their allies crafting customized accommodations to mitigate accessibility issues in real time [17]. Jain et al.’s autoethnographic study over 2.5 years revealed how culture, society, and location shape communication technology use [16]. Despite the growth of technology to support DHH-hearing communication, mostly ASR-based captioning apps, research with DHH users highlights that these apps are often inaccurate and inefficient for communication [24, 32, 39]. McDonnell et al. probed the social and environmental prerequisites for collaborative captioning technology design [26, 27, 38]. Seita et al. studied behavior shifts in hearing individuals using ASR that impact DHH communication partners [35, 36, 37] and pursued co-designed captioning tools for both groups [39]. Others explored collaborative caption editing supported by machine learning [4]. Although McDonnell et al.’s and Seita et al.’s research partially covers hearing participants’ efforts toward collective access, it mainly concentrates on ASR technology, which is only one of several communication methods used by Deaf and hearing communicators. Building upon these studies, our study investigates a range of technologies and examines how they contribute to facilitating collective access.

2.2 Deaf Culture, Communication, and Sign Language

Culturally Deaf individuals predominantly rely on visual rather than auditory channels for communication [2, 12, 30]. Instead of spoken language, many use sign languages such as American Sign Language (ASL) – distinct languages in their own right, with their own phonological, morphological, and syntactic structure [43]. This inclination toward visual communication is also apparent in their unique approach to interactions, particularly when collaborating with hearing individuals. These interactions encompass actions like reorganizing surroundings for optimal visibility, adapting signing pace and structure to the recipient’s signing proficiency, using visual-gestural cues instead of verbal calls, and integrating full-body iconic depictions into their utterances. For example, Rui et al. revealed the challenges and accessibility barriers that signers and interpreters face on videoconferencing platforms and suggested mediums that more fully support visual communication [33].
Prior research has delved into the interactions of Deaf individuals across diverse scenarios, as they adapt to mainstream auditory-centric settings. For example, Sandgren et al. explored the coordination of DHH children in mainstream classrooms, providing insights into the intricate dynamics of communication and adaptation within these settings [34]. Our research takes a different approach, focusing on how hearing individuals acquire and adhere to cultural and interactional norms in their communication with DHH individuals. This builds upon the work of Wang and Piper [45], who found that DHH-hearing pairs become skilled at adjusting to their partners’ communication preferences and developing strategies to handle the complex demands of visual communication in co-located collaborations.
Our study delves into the co-creation of technology-mediated communication in various real-life contexts and the use of emerging technology – specifically, ASR. We highlight the instrumental role played by DHH individuals in driving accessibility initiatives, particularly in sign-language-centric settings where they constitute a significant majority. Simultaneously, we encourage hearing individuals to reflect upon their responsibilities toward inclusivity, especially when they constitute the majority in auditory-centric environments. This contribution adds to the wider discourse on collaborative accessibility endeavors. Language, as a primary tool, facilitates the socialization process, enabling individuals to engage in culturally appropriate behaviors and interactions [21]. Our research also documents the ASL learning process, observing how a hearing individual becomes socialized within sign-language-centric environments and engages in culturally appropriate behaviors and interactions.

3 Background and Method

3.1 Ten-Week Immersive Experience: Varied Hearing and Signing Ability at the REU Summer Program

Our project was conducted at a research site designated by the National Science Foundation for its Research Experiences for Undergraduates (REU) program, with a specific focus on accessible information and communication technologies for DHH needs. This extensive ten-week summer research program is open to all undergraduate applicants, deaf or hearing, who are interested in exploring relevant technologies. It takes place at the world’s premier higher education institution serving DHH people, a bilingual learning environment featuring ASL and English that provides all students full access to learning and communication. Since 2014, the annual REU summer program has prepared many DHH and hearing undergraduate attendees to pursue graduate studies, including two authors of this paper who, after their initial camp experience, went on to earn their PhDs and rejoined the summer camp in 2023 as faculty mentors.

3.1.1 Membership: Undergraduate Participants, Graduate Research Assistants, and Faculty Mentors.

In 2023, the REU program had seven DHH and six hearing students. REU participants were immersed eight hours a day in a collaborative community, guided by eight faculty researchers specializing in accessible technology. Three graduate student assistants further supported these activities, for example by giving feedback on reports, checking in on daily progress, and offering guidance on literature reading. Over half of the program’s faculty members and graduate student assistants were DHH. All members had at least intermediate ASL skills, except the lead author of this research, who knew only basic ASL signs at the start of the summer camp.

3.1.2 Technology Infrastructure.

By default, camp members, both DHH and hearing, were expected to communicate in ASL in in-person settings, in a physical location specially designed for signers, as depicted in Figure 1. Slack served as the platform for asynchronous daily discussions on research projects alongside in-person interactions. Occasional virtual and hybrid sessions were conducted via Zoom to accommodate members who could not be present on campus. The camp did not mandate or restrict the use of any ASR tools; instead, their usage was based on members’ communication needs and preferences. The lead author, as the only hearing graduate research assistant, spent on average three hours a day in in-person interactions with the participants and one hour a day interacting with faculty mentors in various settings.

3.2 Autoethnography as Research Method

Autoethnography is a form of academic writing that combines personal experiences with analysis and interpretation [10]. It focuses on the author’s own life experience and connects their insights to aspects such as self-identity, cultural norms, and communication. In the field of accessibility, researchers have recently used autoethnographic methods to highlight the personal narratives of individuals with disabilities [14] as well as those who interact with them [22]. While previous autoethnographic studies have predominantly focused on the experiences of users with disabilities as a minority group within the context of majority norms, our research provides a unique perspective by exploring the reverse scenario, in which the hearing majority becomes the minority.
Group autoethnography, similar to collaborative autoethnography, sets itself apart from individual autoethnography by incorporating the experiences and insights of multiple researchers into a single study [6]. The process typically unfolds in three main phases. 1) Data collection: gathering various types of data, such as memos, quotes, and personal reflections. The ‘group’ element can be more pronounced in this phase, as in the study by Mack et al. [22], where each member documented their experiences individually yet within the context of the collective experience. 2) Meaning-making: a comprehensive review of the collected data and the extraction of significant themes, often described as “ah” moments, where deeper insights emerge through data analysis. Our approach emphasizes the ‘group’ aspect during this stage, with team members coming together for in-depth, in-person reflection meetings that foster a shared interpretive process. 3) Writing: the final narrative can adopt various styles; the ‘analytical-interpretive’ style is typically more scholarly, drawing on theoretical frameworks and literature. As far as we know, there is no single prescribed method for incorporating the ‘group’ aspect into these phases; each research team may choose different points in the process to emphasize collaboration. Some papers emphasize collaboration in both data collection and meaning-making, e.g., [1]. In our case, the collaborative spirit is most evident during the meaning-making phase, ensuring that the group’s collective insights shape the interpretation and conclusions of our writing.
Our group autoethnography team includes both DHH and hearing members, as listed in Table 1. In our case, the lead author, a hearing graduate student assistant (H1), documented her personal reflections through field notes during interactions with DHH students, graduate student assistants, and faculty members. Four DHH faculty members (D1-D4) and one hearing faculty member (H2) then met with H1 weekly in person to discuss her notes. Our analysis focuses on how H1, with weekly input from D1-D4 and H2 via the group autoethnography method, learned the norms and the inclusive use of technology to navigate the sign-language-centric space.

3.2.1 Data Collection.

We present data collected during the entire ten weeks of the camp. Every week at the team meeting, H1 would share her experiences in various scenarios, sometimes successful and sometimes not, and discuss the communication technologies and tactics employed while interacting with DHH individuals. In turn, the DHH team members would contribute personal observations and experiences, from REU or earlier, to help the hearing individual comprehend and assess effectiveness, offering suggestions for improving communication technologies and strategies. This collaborative introspection facilitated iterative improvements in strategies and, notably, empowered the hearing individual to independently grasp the nuances of effective technology-mediated communication.
ID | Hearing Status | Role in REU | Positionality
H1 | Hearing | Research Assistant | A Ph.D. student in Information Sciences; her research focuses on inclusive and accessible educational technology. She initially learned about 200 ASL vocabulary words using online resources before the summer program and further improved her proficiency during the program.
H2 | Hearing | Faculty Mentor | A faculty member who has taught DHH students for over 30 years. With her Ed.S. and Ph.D. in computing technology in education, she has conducted numerous research projects with undergraduate DHH students on technology integration, e-learning, technology-supported learning solutions for special populations, online education, instructional design and evaluation for blended learning, and learning assessment.
D1 | Deaf | Faculty Mentor | A postdoctoral researcher who uses hearing aids regularly. His primary language at work and in day-to-day life is ASL, and he often communicates in spoken English as well. His current research focuses on leveraging artificial intelligence technologies to develop accessibility tools for DHH people, and he holds a Ph.D. in Psychology with a background in sign language linguistics and language development. He is also a former REU student of this program, from the summer of 2015 as an undergraduate.
D2 | Deaf | Faculty Mentor | A postdoctoral researcher who uses hearing aids and is fluent in written English and ASL. He has a Ph.D. in Computing and Information Sciences. His research lies at the intersection of computer science, human-computer interaction, and accessibility. He primarily works on accessibility for the DHH community and has conducted studies investigating the design and usability of automatic captioning, automatic speech recognition technologies, and other accessible technologies. He is a former REU student (2015) and REU graduate student mentor (2016) of this program.
D3 | Deaf | Faculty Mentor | A professor and Director of the Technology Access Program research group. He also co-directs the Accessible Human-Centered Computing graduate program and the REU program. He has led large accessibility-related federal grants and contracts for the past ten years. He has strong ties to DHH consumer advocates and collaborates closely with them to disseminate research findings to policy makers and industry. He holds a Ph.D. in Computer Science.
D4 | Deaf | Faculty Mentor | A professor and Director of the Information Technology undergraduate program and the Accessible Human-Centered Computing graduate program. With over fifteen years of experience in the accessible technology field, he brings a wealth of lived experience and research to the field. He focuses on strategic planning, local industry, alumni relations, and faculty support. He has a Ph.D. in Computer Science, a Master of Laws (LLM) in Intellectual Property and Information Law, and a Juris Doctor (JD).
Table 1: Positionality statements of researchers involved in the group autoethnography.
The strategies (the reflection outcomes in section 5.1) and the reflection process (section 5.2) were collaboratively refined by all members and evolved positively over time. Evaluations and refinements of our approaches were conducted both synchronously and asynchronously. The data for the findings sections mainly comprised H1’s notes, D1-D4’s and H2’s discussion during collaborative reflection meetings in ASL and spoken English, and responses to follow-up prompts in written English.
During synchronous interactions with DHH individuals in REU, H1 actively sought suggestions for improvement from the DHH individuals during their conversations, taking notes on the feedback received. H1 also documented instances of communication she perceived as ineffective, such as conversations that took longer than expected or miscommunications she did not notice until later. Asynchronous evaluations mainly took place during presentations where H1 reflected on her communication challenges and the strategies she had employed, referring to her notes from the synchronous interactions. Following this, D1-D4 and H2 would discuss the reasons behind these challenges (e.g., the importance of visual attention) and propose ways to improve communication. Additionally, D1-D4 provided feedback on H1’s interactions with them and others, recalling specific instances and sharing their insights. H1 also sent out follow-up prompts in written format to D1-D4 and H2, who could respond only to H1 or to the group if preferred. H1 then implemented the suggestions in subsequent interactions with others.

3.2.2 Data Analysis.

The notes taken by H1 and the quotes collected during collaborative reflection and follow-up written discussion were analyzed using open, axial, and selective coding to articulate the social, cultural, and personal implications of mixed-ability environments [9, 22]. Similar to Jain et al. [16], the lead author H1 collected and organized reflection notes into initial themes (e.g., communicate with whom, where, and what; challenges; strategies; and lessons learned), which were then discussed at the weekly team meetings, resulting in critical revisions that added, removed, or merged codes. In this process, we generated new reflections on the contributed data relevant to the axial codes. The axial codes were then distilled into three overarching themes: 1) technologies used in scenarios, 2) strategies for optimal technology adaptation in mixed-ability communication, and 3) how other team members/authors helped H1. Following best practices suggested by [11, 22], the group autoethnography team members/authors were continuously involved in reviewing paper drafts.

4 Findings- Common Synchronous Learning Scenarios and Technology Use

During the camp, H1 had the opportunity to engage in different scales of synchronous learning interactions with undergraduate participants and faculty mentors, both in person and hybrid. In the sections below, H1 describes the four common scenarios, the technology used in each, the important non-verbal communication, and the commonly used ASL signs that supported communication. An overview is provided in Table 2.

4.1 Scenario 1: Attending Large Presentations with Interpreters

We had certified ASL interpreters for our larger-scale events, which usually involved a group of 10 to 30 people (where all students and mentors were present). One such event was the weekly presentations, where DHH and hearing researchers were invited to share their research topics. For H1, ASL interpreters played a crucial role in facilitating communication by conveying the ASL-based presentations in English. However, H1 still encountered challenges during these sessions that affected her understanding and level of participation.
Even with interpreters present, information could still get lost in translation. Occasionally, H1 would not fully understand the translation or felt she was missing key contextual details; she noticed that both the presenter and audience might insert additional comments into the conversation that did not always get formally translated, particularly when multiple people were signing at once. This sometimes led to confusion on H1’s part, prompting her to rely more on reading the presentation slides to better grasp the intended meaning. H1 had to sit in the center of the audience so that she could have a clear view of the slides. In some instances, when interacting with people sitting adjacent to her, H1 would turn to them or tap their shoulder and type to them on her phone, “Did the presenter mean X?” The other person would sign “Yes” to confirm or “Later” to postpone the discussion, so as not to miss any information themselves. H1 accepted the limitations of proxy translation and chose to tolerate ambiguity instead of interrupting the presentation to ask questions directly. Meanwhile, H1 felt less motivated to actively participate, as she constantly felt excluded and was missing information.
Despite H1’s gradual improvement in ASL skills, the situation remained largely unchanged. At first, H1 attributed her misunderstandings to potential issues with the quality of interpretation and to her reluctance to disrupt the presentation flow by directly interacting with the interpreter. However, after explanations from the DHH participants, H1 realized it is important for hearing people to understand how ASL interpreters work and to learn to work with them. To overcome the language barrier during presentations, H1 learned to ask for clarification by typing a short question on a laptop under the table and showing it to someone sitting adjacent, allowing them to respond with ASL signs in a less visually disruptive manner, as depicted in Figure 2. In this scenario, typing enables a conversation to be paused and resumed later for clarification, which is not feasible with ASL or spoken communication. This experience also made H1 realize the significance of time lag and information segmentation when assimilating knowledge from multiple modalities, particularly in higher education, and heightened her awareness that visual information can be distracting to some individuals.

4.2 Scenario 2: Leading Small Group Presentations with/without Interpreters

H1’s major presentation during the camp came when she presented a summary of themes and examples from her work in the previous week to the other autoethnography team members (mostly DHH), who sat around a table and looked at H1’s laptop screen displaying the presentation slides. The team had captions from Otter.AI turned on at the maximum size (as depicted in Figure 2) because interpreters were not available for that meeting. In the beginning, H1 encountered challenges in coordinating visual attention among the team, as some members preferred using their own devices for ASR, and the captions on their device screens were invisible to others. The DHH team members made this choice because the captions offered by the Android app “Live Transcribe” were both more accurate and easier to read, while viewing the captions on H1’s laptop screen was difficult from where some members sat.
To enhance the coordination of visual attention in this scenario, H1 took several actions, albeit without significantly altering the situation. For instance, H1 installed the preferred captioning app on her phone and positioned it on the table, creating a display anyone around the table could view. H1 could also observe the speed at which the captions appeared and adjust her speaking pace and volume accordingly (as depicted in Figure 2). Another approach H1 used was to wait until everyone had shifted their gaze to her, away from reading the ASR captions. Drawing from the guidance of other mentors, H1 used the sign “understand” or “keep going” in conjunction with raised eyebrows, which mark a question in ASL, to assess whether everyone was following along. This technique proved helpful, as it encouraged the audience to ask questions for clarification. Building upon her own experience in the first scenario, where she relied on presentation slides for clarification, H1 modified her slides by enlarging key information, using distinct colors and fonts, and incorporating illustrations she personally drew to highlight the main points. The typed text on the slides proved beneficial in rectifying ASR errors. Additionally, H1 used a pencil or highlighter to indicate specific words on her slides after the audience shifted their attention from the ASR, providing an additional visual attention cue. These examples illustrate the need for hearing individuals to adopt various chronemic signals when communicating with DHH individuals using ASR technology. Directing visual attention and waiting for confirmation to continue are useful skills for working with ASR technology.
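For concreteness, the sketch below shows a minimal version of the shared-caption setup described above: live ASR output rendered in one large font that everyone around a table can read. It is a hypothetical reconstruction rather than the study’s actual tooling (the team used Otter.AI and Android’s Live Transcribe); the SpeechRecognition and Tkinter libraries, the Google Web Speech backend, and the font size are our assumptions.

```python
# A minimal sketch of a shared large-font live-caption display.
# Assumptions: the SpeechRecognition package and a default microphone exist.
import queue
import tkinter as tk
import speech_recognition as sr

captions = queue.Queue()  # thread-safe handoff from recognizer to UI

def on_speech(recognizer, audio):
    # Runs on a background thread each time a phrase is captured.
    try:
        captions.put(recognizer.recognize_google(audio))  # free web API
    except sr.UnknownValueError:
        pass  # unintelligible speech; keep the previous caption

root = tk.Tk()
root.title("Shared live captions")
label = tk.Label(root, text="", font=("Helvetica", 48), wraplength=1200)
label.pack(expand=True, fill="both")

def poll_captions():
    # Update the label from the main thread only (Tkinter is not thread-safe).
    try:
        label.config(text=captions.get_nowait())
    except queue.Empty:
        pass
    root.after(200, poll_captions)

r = sr.Recognizer()
mic = sr.Microphone()
with mic as source:
    r.adjust_for_ambient_noise(source)  # one-time noise calibration
stop = r.listen_in_background(mic, on_speech)

poll_captions()
root.mainloop()
stop(wait_for_stop=False)
```

Placing one large display on the table, as H1 did with her phone, serves the same goal as this sketch: a single shared locus of visual attention instead of several private ones.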
Common Scenarios | People Count | Occurrence over 10 Weeks | Interpreter? | Technologies Used | Non-Verbal Cues | Basic ASL Signs
In-person: Attend Presentations | 10-30 | 10 | Always | Type on Phone | Tap Shoulder | “Yes”, “Later”
In-person: Make Presentations | 5-10 | 5 | Sometimes | ASR on Shared Screen and Personal Devices | Point at Screen, Wait for Audience Visual Attention | “Keep Going”, “(Don’t) Understand”
In-person: Mentor Undergrads | 2-5 | Daily | Never | ASR on Shared Screen, Digital Pointer/Highlighter, Type on Digital Sticker | Point at Screen, Tap Shoulder, Shift Head Directions | “Look At”, “Say Again”, “(Don’t) Understand”
Hybrid: Mentor Undergrads | 2-5 | Bi-Weekly | Never | Type in Zoom Chat, Type on Shared Online Doc, ASR on Zoom, FaceTime, Slack Huddle | Wave Hand, Shift Body Directions, Wait for Audience Attention | “Type”, “Hold”, “Come”, “(Don’t) Understand”
Table 2: Four common learning scenarios experienced by H1. Such scenarios often do not include interpreters, as interpreters require advance booking. As a result, REU participants, with varied signing/speaking abilities, depended on technology to facilitate communication.

4.3 Scenario 3: Informal F2F Mentoring Sessions without Interpreters

During the camp, informal in-person conversations were the primary mode of communication, and ASL interpreters were not available. In such instances, H1 had to depend on ASR and other alternative methods to ensure effective communication. In mentoring sessions, for example when H1 provided feedback on literature review reports to students whose visual attention was focused on her laptop screen, H1 recognized the need to further enhance visual attention allocation. The basic setup is shown in Figure 2: H1 utilized live captions through Zoom on her laptop and moved the captions so they appeared directly below the current discussion material. H1 also employed a pencil/highlighter to point out specific words. Similar to scenario 2, H1 waited for a complete sentence to be fully displayed in the captions before using the pencil to direct attention. When a topic or question needed to be discussed among all, H1 used a digital sticker to spell out the question so students knew exactly what to discuss, thereby eliminating potential ambiguity or confusion.
Meanwhile, new challenges emerged in these informal discussions: some members felt the need to start their own conversations, which led to an increased number of side conversations and caused frequent shifts in visual attention among multiple individuals and the screen. Additionally, since we were all facing the laptop, there were instances when we used a combination of signing, speaking, and sim-com (simultaneously signing and speaking). In such cases, signs that specifically emphasized visual attention, like “Look” and “Again”, proved highly valuable. They were widely employed to prompt others to direct their gaze toward specific areas or to request repetition due to missed information.

4.4 Scenario 4: Informal Hybrid Mentoring Sessions without Interpreters

Hybrid meetings share the same purpose as scenario 3, and many strategies apply to both. Cameras and small screens pose challenges in hybrid settings. The members physically present on campus often shared the same camera as an easy way to prevent audio echo issues (some DHH individuals preferred to have their audio on). Initially, the collocated members would sit closely next to each other, but as they began signing, they would naturally turn to face each other, moving out of the camera’s frame. In such situations, the remote member had to sign “Hold” and wave their hand to attract the collocated members’ attention.
Given the inaccuracy of ASR and some members’ lack of knowledge of signs for certain research terminology, some would switch to typing. If one of the collocated members was typing in the Zoom chat (sometimes the typing person disappeared from the camera, as shown in Figure 2), the other collocated members would sign “Typing” to the camera to let the remote member know where to look. Another visual attention direction strategy (learned and adopted from mentors) was to open a shared Google Doc on a shared screen, where all members could see each other typing and clearly knew where to look. In summary, the issue of shared visual attention becomes more challenging in remote settings where individuals’ physical presence is not visible. Techniques such as shared typing documents can be a valuable additional modality that helps allocate visual attention. This experience underscored the importance of adapting and adopting familiar technology when using it with a new population.
Figure 2: Four Common Learning Scenarios. The figures depict how common ASL signs are used to support communication in each scenario: Scenario 1, “Yes” below the table to answer typed questions; Scenario 2, “Understand” to confirm that everyone follows; Scenario 3, “Look” to direct others’ visual attention and “Again” to request repetition; Scenario 4, “Type” to inform remote participants of one’s actions and direct their visual attention, and “Hold” to ask remote participants to wait.

5 Findings- Effectiveness of Technology-mediated Communication Strategies

In the four scenarios described above, the emphasis was placed on adopting and adapting the visual communication practices used among DHH individuals. This emphasis extends beyond the mere acquisition of specific ASL signs (e.g., “Look” as depicted in Figure 2); instead, the focus was on understanding when and how ASL signs are used to facilitate effective technology-mediated communication. Seen this way, learning to communicate effectively in sign-language-centric spaces involves not just sign-language acquisition but socialization into modality- and community-specific communicative practices and values [25, 40]. The team observes an increasing use of technology to assist communication within the Deaf community and between DHH and hearing individuals. Presented below are three strategies that can be useful to hearing individuals when interacting with DHH individuals, even if they don’t know ASL.

5.1 Overview of the (In)Effectiveness of Three Technology-Mediated Communication Strategies

The strategies below, refined collaboratively by DHH and hearing individuals, were well received over time. The effective use of technology maintains the pace of communication and ensures accurate message conveyance.

5.1.1 Directing Visual Attention.

In scenarios 2-4, techniques such as finger pointing, digital highlighting, using a pen, or utilizing sticky notes, alongside sharing screens with real-time typing, can help direct others’ visual attention, enabling them to quickly grasp where to focus their gaze for effective communication. The sign “Look At” (followed by pointing to the location to look at) was very important for redirecting people’s visual attention. This confirmed the findings in [45], which reported collocated DHH-hearing teams learning to monitor and coordinate visual attention. Extending this work to hybrid DHH-hearing teams, where members are not physically present in the same location, members rely on small video thumbnails to stay engaged in the conversation. It becomes crucial for members to inform others about their activities when they momentarily step away from the camera or switch to another task, ensuring that everyone remains informed. For example, the sign “Type” can be used when switching from signing to typing in Zoom chat or sending an email. A DHH faculty mentor explained to H1 why visual attention is important and how to effectively guide it:
I always take detailed notes on Google Docs and even do that in all-deaf teams when everyone knows ASL. My experience has shown that it greatly reduces miscommunications and misunderstandings about individual and shared responsibilities, irrespective of the mode of communication used. I suspect that it is partly due to the problem of split visual attention, which persists even in an all-ASL environment and is exacerbated by students who easily get distracted. – D3
More manual effort was needed, especially because current technology designs visual cues around voices:
Directing visual attention works well when the people or technology supports it – such as in Zoom where the window border of the active speaker is bolded. Unfortunately, most attention directing is designed around speakers, not around signers. – D4
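D4’s point suggests a design direction worth sketching: key the “active participant” highlight to visible signing activity rather than to audio. The snippet below is purely illustrative and is not a system from the study; it assumes OpenCV and a webcam, and the frame-differencing heuristic and threshold are placeholder choices of ours.

```python
# Illustrative sketch only: a crude "signer spotlight" that bolds a border
# when hand/arm motion is detected, instead of keying the highlight to audio
# as today's active-speaker cues do. Assumes OpenCV (cv2) and a webcam.
import cv2

MOTION_THRESHOLD = 4.0  # mean pixel difference; placeholder value to tune

cap = cv2.VideoCapture(0)
prev_gray = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)  # suppress sensor noise
    if prev_gray is not None:
        # Mean absolute frame difference as a rough proxy for signing activity.
        score = cv2.absdiff(prev_gray, gray).mean()
        if score > MOTION_THRESHOLD:
            # Bold green border, mimicking Zoom's active-speaker highlight.
            h, w = frame.shape[:2]
            cv2.rectangle(frame, (0, 0), (w - 1, h - 1), (0, 255, 0), 8)
    prev_gray = gray
    cv2.imshow("signer spotlight", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```

A real tool would need pose estimation or sign detection rather than raw motion, but even this crude cue shifts the “spotlight” from audio to the visual channel that signers actually use.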

5.1.2 Pause-and-Proceed.

Shared visual attention acts as a signal to proceed. However, if visual attention is not shared, users should wait for shared attention to turn their way or employ approaches to capture it if necessary, as discussed in the previous section. From scenario 2, an important lesson emerges: it is critical for the presenter/speaker to wait until all members have shifted their focus away from captions and interpreters (if present) and to confirm that attention has turned to them. Only then can one proceed to converse. The sign “Hold” can be used to convey the need for others to wait, particularly in hybrid settings where other visual attention cues are not well represented on platforms like Zoom (as depicted in scenario 4). In addition to pausing, it was essential to double-confirm using signs such as “Understand?” (as depicted in scenario 2) to ensure everyone was following along. Signs like “Again” (as depicted in scenario 3) can be used to ask others to repeat and to confirm one’s own understanding. These strategies are necessary to keep everyone on the same page and promote communication. One DHH faculty mentor elaborated on the significance of making sure everyone was at the same pace:
“Pause and proceed worked well in **anonymous** when everyone had an intuitive understanding of pausing when people were not paying attention by not looking in proximity. It did not work as well in meetings where some of the audience did not understand this.” – D4
... if visual attention is not maintained, then waving your hand to get attention is a good solution to make sure everyone is on track. Pause-and-proceed methods of capturing attention such as waving hands would work in moderation. If you try to lock eye contact and grab attention 20 times during a 15-minute presentation, for example, that would be excessive. If speaking, it’s probably good to also allow people to “interrupt” you with questions or clarification to help ease any misunderstandings. – D3
For hearing people, it was a learning process to be patient and sensitive to DHH individuals’ needs for visual attention/focus. D1’s quote in section 5.2.2 explains how H1 improved toward the end of the summer camp. In a follow-up prompt, H2, who had observed that H1 looked anxious and seemed unsure how to capture others’ attention, also shared her classroom practice for getting students’ visual attention.
... in a group environment, it’s difficult to get everyone’s visual focus as DHH individuals love to sign to each other. With signing, they can easily carry out a side conversation even when they do not sit together. For example, in a classroom, one student can sign to another student sitting on the other side of the room and carry out their own conversation. When this happens, I normally stop my lecturing and wait until one or more students in the class signalled to their peers and demanded them to stop their side conversation and pay attention so the class lecture can continue. – H2

5.1.3 Back-channeling via Expressive Body to Maintain Communication Flow.

Another strategy the DHH faculty members helped H1 understand was the importance of providing visual acknowledgment in in-person communication: for example, nodding to show agreement or that points were understood, or shaking the head to indicate disagreement or non-understanding, which would lead to asking for further clarification. It is a norm in Deaf culture to confirm in a conversation either positively (understand, agree, etc.) or negatively (do not understand, disagree, repeat, etc.) before the communication flow continues. Such dynamic feedback is especially important in technology-mediated communication, as current ASR neither offers a reliable feedback mechanism nor transfers emotions accurately to facilitate mutual understanding.
The team found it intriguing that all four DHH faculty members consistently signaled via body language whether they followed the meeting flow and whether repetition was necessary, whereas H2 only occasionally expressed her need for clarification. Faculty D1-D4 guided H1 to ask a person to repeat or clarify when necessary via human interpreters. Such practice leverages the interactivity facilitated by human interpreters. The DHH faculty mentors explained that it is common for DHH individuals to openly and honestly express the need for clarification or repetition when they are lost in a conversation. It is good practice to ensure both parties are on the same page in a conversation via backchannels, e.g., body language, to prevent communication breakdowns and maintain interactivity.
With the guidance of D1-D4, H1 realized the importance of using facial expressions and body language, especially the role of eyebrows in confirming, negating, or questioning in real-time conversations. These form a key component of ASL grammar and serve to distinguish among statements, questions, and negations [28]. Furthermore, even when a specific sign was unknown, it was beneficial to act it out, gesture it, and maintain a signing modality to make it visual for others. In Figure 2, all characters have facial expressions and body orientations.

5.2 Iterative and Collaborative Reflection on the (In)Effective Strategies

The previous section offered practical insights for working with DHH individuals. Here, we detail the strategy development process, highlighting how ongoing collaborative reflection helped H1 foster more inclusive technology use; this process could inform the development of inclusive practices with other populations.

5.2.1 Hearing Individuals Are Not Always Aware of Communication Breakdowns; DHH Individuals Explained Various Forms of Breakdowns.

DHH members aided H1 in identifying several communication breakdowns that were not initially evident to her. An illustrative instance occurred in scenario 1, where H1 sought clarification from someone else. This experience enabled H1 to understand that the assumption that auditory information remains accessible while multitasking, valid for hearing individuals, does not hold true for DHH individuals. The following quote is from D1, who explained to H1 that a communication breakdown had occurred that H1 had missed, believing the conversation to be proceeding smoothly.
... if I feel like the clarification would require time, then I am inclined not to give an explanation right away – though its not necessarily rude to ask, since sometimes the answer is indeed quick and easy to give. While this is reminiscent of ‘dinner table syndrome’ (deaf being told by hearing ‘its not important’ or ‘I’ll tell you later’) it does practically ask me to give up my own opportunity to gather information during a presentation (especially since as a deaf person I cannot rely on the spoken language interpretation to keep track of the presentation) – D1
Sometimes, more time and effort are needed for DHH individuals to catch up; this effort is invisible and needs to be appreciated. For example, sim-com, on the whole, was not perceived as improving inclusivity in communication. It was viewed as an adjustment primarily borne by the DHH individuals to bridge the gap with the hearing community, while their hearing counterparts remained oblivious to these adaptations and formed assumptions about hearing and verbal capabilities. Similar to previous research, some members of our team also expressed that sim-com was considered ungrammatical due to the mixing of English and ASL grammar [42]. The spoken message and the signed message produced during sim-com are not truly equal.
There are only bad options: sim-com, which distorts both my speaking and signing; typing, which is slow, or signing and then repeating what I signed by speech, which also is slow and inefficient. – D3
Sign language is my preferred modality and I do not want people to assume I can understand speech or am comfortable speaking – D1
These adaptations were frequently overlooked as pleas for inclusivity, leading DHH individuals to consistently emphasize their need for accessible conversations or to walk away. The quote below was collected when H1 asked the team for more signals of communication breakdowns.
If they continue to speak without trying other forms of communication (typing, etc.) I will continue to indicate “deaf” by pointing to my ear and potentially just walking away. – D2

5.2.2 DHH Individuals Help Hearing Individuals Anticipate Communication Breakdowns.

Three primary recommendations, collected via collective reflection or post-reflection follow-ups, were embraced to mitigate potential communication breakdowns. H2 attentively observed H1’s struggles and actively provided feedback during follow-ups, as well as demonstrating the actions herself.
(1) Know that technology is imperfect: proactively prepare supplementary visual materials and references, enabling users to make sense of ASR output through cross-validation and discussion.
I think they (Automatic captions) function best in combination with other modalities – for example, if the hearing speaker already has slides ready or visual aids they can point to and comment on the relevant bullet points or diagrams, then reference the relevant parts of the automatic captions that captured the comments (and in doing so, check for themselves that the captions are accurate and check for understanding with team members). This is much better than speaking continuously without materials or checking captions. – D1
Furthermore, the group observed that more technologies are being used in communication among DHH individuals and with hearing individuals around them, beyond the camp. While these technologies are generally helpful, it is important to make communication breakdowns visible to both parties, as this makes collectively fixing them later possible:
Auto-caption apps are growing in use – but they too effectively hide the access problem from hearing people, such that when a communication breakdown occurs, it is harder to repair. – D1
(2) Prepare for peak communication breakdowns: if possible, stay in a signing mode to smooth the transitions between technologies.
One thing about typing on desktop or collaboratively is that I expect still there to be a lot of in-person communication – short responses to typed messages should be done through gesture rather than in typed modality (‘do you understand/ yes I understand’) and anything that the new signer knows how to sign, should be signed. I find the first author didn’t always do this and would stay in the typed modality ‘too much’ and also crucially, not initiate or search for eye contact enough (more so at the beginning of our interactions) – D1
Group settings (more than three people, in our context) carry a high risk of passive listening. Group meetings are complex, with various communication methods in play; people must think about how others understand and respond. Differences in communication speed and turn-taking can cause pauses. Typing, which is slower but fine in small groups, can disrupt larger-scale discussions, as the quotes and the turn-taking sketch below illustrate.
Typing on the phone worked well in casual one-on-one encounters with hearing individuals. I could quickly open the notepad app and since I am a quick typer/texter I can quickly get my ideas across. This method falls apart in group settings. It was not feasible to type on the phone and show it to multiple people, and how would they all respond back to me? On their own phone? Not the best solution.... I did not have a negative experience with collaborative typing (in a shared doc), however... it does not lend well to overlapping conversational voices. (If someone is typing while I am typing I have to stop typing to respond to their typing, and it gets complicated from there) – D2
We (two hearing individuals) used Google shared doc at small meetings with D1, it worked exceptionally well as all three of us are fluent and comfortable with the technology. However, when it was used at a bigger meeting with several deaf people... They started signing as soon as they saw the typed text in the shared doc and did not follow the collaborative typing protocol for turn-taking. – H2
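H2’s quote names a “collaborative typing protocol for turn-taking” without specifying one. As a hypothetical illustration of what such a protocol could look like in software, the sketch below serializes writers through a queue so that overlapping turns (the failure D2 describes) cannot occur; the class and method names are our inventions, not a tool used at the camp.

```python
# Hypothetical sketch of a turn-taking protocol for collaborative typing:
# writers join a queue, and only the writer at the head may type, so
# overlapping turns cannot occur. Names and API are illustrative only.
from collections import deque
from typing import Optional

class TypingTurnQueue:
    def __init__(self) -> None:
        self.queue: deque[str] = deque()

    def request_turn(self, name: str) -> None:
        # Joining twice is a no-op; you keep your place in line.
        if name not in self.queue:
            self.queue.append(name)

    def current_writer(self) -> Optional[str]:
        return self.queue[0] if self.queue else None

    def finish_turn(self) -> None:
        if self.queue:
            self.queue.popleft()

turns = TypingTurnQueue()
for person in ["H2", "D1", "H1"]:
    turns.request_turn(person)
print(turns.current_writer())  # H2 types first; D1 and H1 wait their turn
turns.finish_turn()
print(turns.current_writer())  # now D1 may type
```

Even without tool support, the same discipline can be enforced socially: one agreed cursor location in the shared doc, and a visible hand-off before the next person types.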
(3) Ask each DHH individual how they prefer to use each technology. Each DHH member tends to have a preferred method of utilizing specific technologies; for instance, D2 consistently emphasized the use of Slack for typing rather than resorting to written notes on paper because he has strong English skills, while D4 considered typing to kill interactivity. Drawing on her 30 years of experience working with DHH students, H2 highlighted the importance of using DHH individuals’ preferred communication methods and technology, and especially of understanding how they are used in different contexts. D4 introduced the Live Transcribe app as effective for quiet, small meetings; at a restaurant dinner, however, it is less effective because the device picks up noise from people talking at nearby tables. This echoes D4’s introduction of the technology to H1:
The autocaptions worked well in 1-1 meetings where there was no noise. The autocaptions did not work when there were many people – I did not know who was talking. Also, the autocaptions worked better for some speakers and not so well for other speakers.... Ask me what would help with communication, as I am in this situation most of the time; they’re not. – D4
The team also realized that certain technologies were preferred by some people in certain scenarios, while others might choose not to use them, especially in group settings. For example, ASR alone was used more often in scenario 2 (in-person presentations), while it did not work well by itself in scenario 3 (in-person mentoring). DHH individuals provided detailed rationales for why their technology-mediated communication preferences vary. DHH individuals have diverse signing, hearing, and language preferences within the community, and these preferences affect their choices in using technology to support communication, especially in large-scale conversations. The quote below was collected when H1 asked the group about when to use certain technologies, after she found that several DHH individuals she interacted with refused to use ASR while others were more positive about it.
Auto captioning works well for receptive listening and for engaging with hearing people when I am the only deaf participant. In such scenarios, I am comfortable speaking for myself to respond. However, this breaks down if there are other deaf attendees, because now I have to ensure that they are not left out. I have a deaf accent, so auto captioning does not work reliably for my voice. That means that speaking alone, without the presence of an interpreter, is not workable... The main thing is to be respectful of my communication preferences. Technology is a tool that can be useful, but it needs to be used on my terms. That is, I have to have a say in what technology is used and how.– D3

5.2.3 Power Dynamics and Fixing Communication Breakdown.

Upon concluding the camp, we reflected on the power dynamics, especially between new and experienced signers in a sign-language-centric environment. This power dynamic, frequently encountered by DHH individuals in everyday life, heightened their awareness of the experiences of hearing individuals who are new to signing and navigating the environment. With this understanding, the DHH faculty members subtly introduced the camp’s communication norms to candidates before the camp started and let them see whether they were comfortable with the environment. Initial interviews were conducted without interpreters, except when specifically requested by the candidates. This approach aimed to set clear expectations and evaluate the candidates’ comfort and adaptability with alternative communication methods, rather than their proficiency in signing. Willingness to learn ASL was a plus but not a requirement.
The use of sim-com by experienced signers was a common method for including new signers, who are hearing, in conversations. However, it was not considered an effective and sustainable method for learning sign language. Sim-com was viewed as a technique that ‘includes’ but does not necessarily ‘center’ sign language, and its effectiveness largely ‘depends on the hearing capabilities’ of the participants. H1 expressed a sense of inclusion when her DHH communication partner used sim-com, as it enabled her to pick up basic signs more easily. At the same time, she acknowledged that her focus on auditory information and lip reading limited her exposure to learning to express and communicate visually.
Communication breakdowns are inevitable in any scenario, and people with more power (in our context, experienced signers) said they should take the responsibility to fix breakdowns. More importantly, demonstrating flexibility in switching communication modalities was acceptable, and treating miscommunication as a learning opportunity was highly desirable. The two quotes below were collected during the post-camp reflection, when all team members were asked to reflect on power dynamics during the summer camp and what they would have done differently.
The people with less power are more hesitant to tell that they did not understand and to ask to repeat or switch to a different communication modality. The burden then shifts to the person with more power to monitor communication breakdowns and fix them as needed. – D4
... situations are more formal when there is a power imbalance, and it can be harder to be flexible in communication strategies – using props, switching between written/typed/signed/spoken modalities (the person in authority is responsible for indicating that being flexible is OK)... Recognizing my role in not just teaching people how to interact with me but with the community at large. My methods should and can be scalable. – D1

6 Discussions

Drawing from a collaborative reflection on a hearing individual's immersive experience in a sign-language-centric educational environment, we showcase nuanced approaches for hearing individuals to communicate effectively with DHH individuals via technology, and we suggest strategies for further improving access by taking social context into consideration when using common technologies (Table 3). Below, we provide design implications for inclusive communication technologies in the context of mixed hearing, speaking, and signing ability, as well as methodological implications for collaboratively creating mutual access.

6.1 Design Implications for Inclusive Communication Technology among DHH and Hearing Individuals

| Common Technology | Preferred Social Scale | Environment Requirements | Benefits for Communication | Possibilities for Communication Breakdowns |
| Type on Phone | One to one | Informal | Fast | The other person may not know how to respond |
| Speech via Auto Captioning (ASR) | Maximum three | Quiet | Mostly accurate | Requires supplementary material for accurate comprehension; ineffective for certain hearing and DHH individuals |
| Type Collaboratively on Shared Doc | More than three | Avoid side conversations | Very accurate | Slow; visual attention to follow conversations is hard; lack of interactivity when online/hybrid |

Table 3: Summary of the communication technologies reflected on via group autoethnography in the findings. The strategies presented in Section 5.1 should be applied when using, switching between, and blending these technologies.

6.1.1 “Typing” as an Accurate and Complementary Communication Modality.

Typing enables more precise and comprehensive expression, especially when conveying specialized or official terminology that may lack widely accepted sign equivalents. For example, previous research found that visualization terms such as "line chart" do not have widely agreed-upon signs [8]. We found that typing complements communication in higher education across various scenarios. In scenario 1, it extends discussions by allowing more time for contributions. Presentation slides in scenarios 1 and 2 serve as visual aids to support discussions. In scenario 4, stickers and the Zoom chat feature clarify discussions, while real-time typing compensates for the absence of physical presence and enhances shared visual allocation. To ensure accurate communication, future communication technology must integrate tools for typing before, during, and after conversations. We corroborated Wang and Piper's earlier findings, which demonstrated that Deaf-hearing pairs utilize a diverse array of communication strategies in co-located settings (such as speaking and typing) [45]. Additionally, we provided further insights into the complexities of communication strategies in group settings. In particular, we emphasized the enhanced communication accuracy that typing provides in conjunction with the emerging use of ASR technology.
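To make this implication concrete, below is a minimal sketch of how typed messages could be interleaved with ASR captions in a single timestamped transcript, so that a typed clarification lands next to the caption it corrects. This is an illustrative assumption, not a system from our study; the Utterance record, the merge_transcript helper, and the example text are all hypothetical.

```python
from dataclasses import dataclass, field
from heapq import merge

@dataclass(order=True)
class Utterance:
    timestamp: float                    # seconds since conversation start
    source: str = field(compare=False)  # "asr" caption or "typed" message
    speaker: str = field(compare=False)
    text: str = field(compare=False)

def merge_transcript(asr_segments, typed_messages):
    """Interleave ASR captions and typed messages by timestamp so that a
    typed clarification appears next to the caption it corrects."""
    return list(merge(sorted(asr_segments), sorted(typed_messages)))

# Hypothetical example: a typed correction lands between two ASR segments.
asr = [Utterance(3.2, "asr", "H1", "we can meet at the lap"),
       Utterance(9.0, "asr", "H1", "around noon tomorrow")]
typed = [Utterance(5.1, "typed", "H1", "correction: lab, not lap")]
for u in merge_transcript(asr, typed):
    print(f"[{u.timestamp:5.1f}s] {u.speaker} ({u.source}): {u.text}")
```

Ordering by timestamp alone keeps the sketch simple; a real tool would also need to handle caption revisions and out-of-order delivery.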

6.1.2 Fostering Shared Visual Allocation as “Spotlights” for Turn-Taking.

Particularly in mentoring and discussion scenarios characterized by frequent turn-taking (such as scenarios 3 and 4), mixed physical and remote presence in a hybrid setting can further complicate the situation [44]. It is crucial for communication technology to be more visual-attention-aware and to encourage hearing users to be mindful of the visual attention of DHH users. For instance, when shared visual attention is not present, no one should be talking or signing. The impact of ASR time lag on communication effectiveness has been extensively discussed in prior research [14, 26, 27], which focuses on designing ASR for informal and interactive small-group conversations. Building upon these earlier works, we suggest that in higher education contexts, the need to align multiple sources of information with ASR output further complicates group practices and should be acknowledged and addressed so that understanding remains visually grounded.
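One hypothetical reading of this "spotlight" idea is a simple gate that declares the floor open only when participants' gaze has settled on the same target. The sketch below is an assumption-laden illustration: the gaze_targets dictionary stands in for whatever eye-tracking or webcam-based attention signal a real system would provide, and floor_is_open is a name we invent here.

```python
from typing import Dict

def floor_is_open(gaze_targets: Dict[str, str], shared_target: str,
                  threshold: float = 1.0) -> bool:
    """The 'spotlight' gate: the floor opens only when at least `threshold`
    of participants are visually attending to the shared target (e.g., the
    current signer or the caption display). Until then, no one should start
    talking or signing."""
    if not gaze_targets:
        return False
    attending = sum(1 for t in gaze_targets.values() if t == shared_target)
    return attending / len(gaze_targets) >= threshold

# Hypothetical gaze estimates for three participants.
gaze = {"H1": "captions", "D2": "captions", "D3": "slides"}
print(floor_is_open(gaze, "captions"))        # False: D3 is looking elsewhere
print(floor_is_open(gaze, "captions", 0.6))   # True under a laxer policy
```

The threshold parameter is a design choice: a strict policy waits for everyone, while a laxer one trades inclusiveness for conversational pace.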

6.1.3 Ironing Out the Switches between Technologies.

As mentioned, switches between technologies, and between technology and humans, are high-stakes moments for conversational breaks. As D1 pointed out, basic ASL can greatly facilitate inclusive communication for short responses, compared with typed messages, because it preserves eye contact. Augmented reality could be designed to provide guiding visual cues that smooth these transitions. For example, the shared eye gaze between hearing and DHH individuals explored in classrooms [18] could be used in a 3D space to guide visual attention during switches between technologies under various circumstances.
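As a sketch of what "ironing out" a switch might mean in software, the snippet below maps each modality to the place where visual attention should land and emits a guiding cue whenever the active modality changes. The modality names, the ATTENTION_TARGETS table, and cue_on_switch are hypothetical; an AR implementation would render the cue as an arrow or highlight in 3D space rather than a string.

```python
from typing import Optional

# Hypothetical mapping from each communication modality to where visual
# attention should land after a switch.
ATTENTION_TARGETS = {
    "signing": "the signer's hands and face",
    "asr_captions": "the caption display",
    "shared_doc": "the shared document",
    "typed_chat": "the chat panel",
}

def cue_on_switch(previous: str, current: str) -> Optional[str]:
    """Describe the guiding cue (e.g., an AR arrow or highlight) to render
    when the active modality changes; return None if nothing changed."""
    if previous == current:
        return None
    return (f"Highlight {ATTENTION_TARGETS[current]} and fade the cue on "
            f"{ATTENTION_TARGETS[previous]}")

print(cue_on_switch("asr_captions", "signing"))
```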

6.1.4 Making Two-Way Adaptations Visible to Increase Mutual Accommodation in Communication.

By observing how DHH individuals adjusted their signing pace, utilized sim-com for explanations, and employed diverse communication modes, such as typing, to communicate effectively with hearing non-signers, the other parties in the process were also motivated to make similar efforts to adapt. Another example is H1's own improvement in making her presentation slides more succinct, with more figures, to eliminate ambiguity and ensure clear understanding. These two-way adaptations enabled us to communicate without relying solely on a single technology, such as ASR. Through the process of adapting and fixing breakdowns, which involved switching and combining different modalities, we gained a deeper understanding of individual preferences. D2 specifically mentioned that he viewed miscommunication as a learning opportunity for both parties. As D2 pointed out, some technologies 'hide the access problem from the hearing, making breakdowns harder to repair', highlighting the need for both sides to have access to these breakdowns for effective repair. In light of this, future technologies can acknowledge and make visible each user's adaptations and enhance their willingness to engage with technology for the purpose of collective access. Mack et al. suggested the importance of anticipating access needs and the necessity of reflection in establishing a norm of accessibility [23]. We further propose that adaptations are often reciprocal and can contribute significantly to collective access.
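A minimal sketch of making such adaptations visible, under the assumption that modality switches and similar adjustments can be logged at all, might simply aggregate them per participant into a shared feed. The Adaptation record, adaptation_summary helper, and the example entries below are hypothetical.

```python
from typing import Dict, List, NamedTuple

class Adaptation(NamedTuple):
    who: str      # participant making the adjustment
    action: str   # e.g., "slowed signing", "switched to typing"

def adaptation_summary(log: List[Adaptation]) -> Dict[str, List[str]]:
    """Group logged adjustments by participant so everyone can see the
    two-way effort that would otherwise stay invisible."""
    summary: Dict[str, List[str]] = {}
    for entry in log:
        summary.setdefault(entry.who, []).append(entry.action)
    return summary

log = [Adaptation("D2", "slowed signing"),
       Adaptation("H1", "added figures to slides"),
       Adaptation("D2", "switched to typing")]
print(adaptation_summary(log))
# {'D2': ['slowed signing', 'switched to typing'],
#  'H1': ['added figures to slides']}
```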

6.1.5 Learning Opportunities for New Signers.

In interactions with H1, a new signer, the team surfaced the prevalent use of sim-com as a means to engage hearing newcomers in predominantly sign language environments. This approach, while inclusive of new signers, was noted to inadvertently hinder the in-depth acquisition of sign language and possibly hide access problems from hearing people. Previous research on sim-com from a linguistic perspective found that messages produced simultaneously in spoken and signed language are not equivalent [42], and not all signers voice. Employed with hesitation by both DHH and hearing individuals, sim-com underscores a potential design space for technology that emboldens hearing learners to embrace sign language while diminishing their reliance on auditory input. Additionally, for those who use sim-com, whether by preference or not, technology can be designed to detect and mitigate discrepancies between the spoken and signed information.
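As one hypothetical sketch of such discrepancy detection: given an ASR transcript of the spoken channel and a gloss stream from an assumed sign-recognition model (itself still an open research problem), a system could flag content words that were voiced but never signed. The stopword list reflects that ASL routinely omits English function words; everything here is an illustration rather than a tested pipeline.

```python
# Function words that ASL grammar routinely omits; voiced-but-unsigned
# words in this set should not be flagged as discrepancies.
STOPWORDS = {"the", "a", "an", "is", "are", "to", "of", "and"}

def simcom_discrepancies(spoken_words, signed_glosses):
    """Flag content words present in the spoken channel (via ASR) but absent
    from the signed channel (via an assumed sign-recognition gloss stream)."""
    spoken = {w.lower() for w in spoken_words} - STOPWORDS
    signed = {g.lower() for g in signed_glosses}
    return spoken - signed

# Hypothetical sim-com utterance where "tomorrow" was voiced but dropped
# from the signing.
print(simcom_discrepancies(["The", "meeting", "is", "tomorrow"],
                           ["MEETING"]))
# {'tomorrow'}
```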
Notably, the phenomenon of DHH individuals learning sign language later in life did not appear in our study; the new signers we encountered were primarily hearing. This could be attributed to the generally young demographic of higher education settings. The experiences of older individuals with hearing loss, and their use of technology for communication and sign language learning, represent a valuable direction for future research. Moreover, young DHH individuals do not always have access to sign language, and many learn it after the age of 18.

6.2 Quick Steps, Messages Left: Towards Co-Creating Inclusiveness and Access

We confirmed prior research [45] finding that teams establish accessibility through their real-time, in-person interactions and the evolving practices that develop over time. Furthermore, we unfold the adaptation process across various real-life scenarios, examining how and why technologies are used. We argue that group autoethnography, as a collaborative reflection process, can create opportunities for discussing and creating access together. Prior research found that group autoethnography allowed researchers from different cultures to unfold 'discomfort' in visiting cultural heritage sites, which can be provoked as a strategy to expose people to different perspectives on technology design [1]. Expanding on this methodology, our research suggests that group autoethnography can be deployed both as a resource for designing more inclusive systems and as a process that transforms people's understanding of using technology in inclusive ways.
As reflected in section 5.2, the collaborative reflection process contributed to H1's learning and adaptation, especially the weekly meetings that made timely changes and adaptations possible. These encounters enhanced H1's skill in sign language, her effective utilization of technology, and her ability to seamlessly transition between different modalities and technologies. At the same time, DHH individuals drew on their own experience as a minority in an auditory-centric world to provide empathy as well as actionable suggestions for H1, herself a minority in a visual-centric world. It also allowed DHH individuals to suggest how technologies that were not initially designed as accessibility technology could be used in accessible ways, offering more nuanced approaches to improving inclusive technology use for wider populations. It prompts people to think about the other parties involved in the communication:
...is expected to put in work and understand that the choices in communication are not necessarily chosen to optimize their ease of communication and understanding – sometimes they will miss information or have to work to communicate clearly or work harder to understand others... The method that might convey the information the ‘quickest’ (speaking alone in our study context) is usually not ideal. – D1

6.3 Limitations and Future Work

Our selection of autoethnography as a qualitative methodology stemmed from the incorporation of first-person approaches within HCI. It is important to note that the set of strategies we present is far from comprehensive. This becomes even more evident considering the diverse range of signing, hearing, and language preferences that exist within the Deaf Community, as highlighted in our findings. We are also mindful that our design team shares common traits: extensive education in STEM disciplines and experience in formal contexts like higher education. However, we have deliberate plans to significantly diversify our user pool in terms of backgrounds and preferences. Specifically, we wish to underscore that assumptions should not be made regarding the relationships between speaking, signing, and hearing abilities, especially when the body is central to ideation. As Spiel and Angelini [41] found, physical bodies can express critiques of technology design in more direct ways than participants might be used to at a language-based level (spoken or signed), and this should be incorporated into accessibility research.
Furthermore, it remains uncertain whether a ten-week period of shared daily living and work experiences is adequate for producing comprehensive outcomes. During the initial weeks, the absence of a human interpreter occasionally hindered the efficient exchange of information, requiring additional effort for clarity. This might have influenced the depth of reflection in the earlier stages. Further research should also include views from hearing individuals who have worked with the community longer, such as H2, who has over 30 years of experience working at the university. Another factor contributing to the limited inclusion of quotes from H2 in the findings section was the substantial alignment between the ideas and confusions expressed by H1 and H2; this resonance was particularly pronounced in relation to H2's experiences during her initial years at the university. The study's limited number of participants and unique context could significantly skew the results. While input from various DHH faculty members adds depth, it is uncertain whether similar insights would emerge elsewhere. Replicating this study in different settings could provide more comprehensive findings.
In our research, we highlight the unique dynamic that emerges when DHH individuals constitute the majority and those with typical hearing become the minority (H1). We found that in such scenarios, individuals with typical hearing often exhibit uncertainty and reluctance in seeking clarification, and need substantial feedback, suggestions, and encouragement from DHH individuals to learn to communicate. Notably, DHH individuals invested significant time and effort beyond their everyday interactions to educate and engage with H1. This underscores the likely challenges for DHH individuals when advocating for themselves as a minority in a majority world, particularly in contexts outside this specific university with its sign language environment. Addressing intricate power dynamics becomes essential for fostering inclusivity and empowerment for individuals in minority roles. For prospective researchers aiming to employ similar methodologies, it is essential to adopt strategies that promote a sense of comfort and willingness among the minority group to engage in communication. For example, social media might allow ethnic minorities to express themselves and clarify stereotypes to the majority in engaging, low-cost, and grassroots ways [7]. In summary, both majority and minority members must recognize the value of devoting time and effort to co-creating access for a wider audience.

6.4 Reflecting on Group Autoethnography Method

Reflecting on the use of group autoethnography as a research approach, our team found it valuable to use H1's experiences to expand personal narratives beyond the confines of the camp and to critically examine each other's interpretations of shared observations. As outlined in our methodology, there is no universally accepted approach for integrating the 'group' element at various stages of group autoethnography. Different research teams may choose distinct moments within the research process to conduct group work. For instance, some studies may prioritize collaboration during both data collection and the interpretive phases, as noted by Bala et al. [1].
Our group's collaborative dynamics were most prominent during the interpretation phase, ensuring that the team's collective wisdom was reflected in the analysis and the findings we presented. The team found it challenging for all members to keep notes for the entire ten weeks, so we decided to center on H1's experience and observations. Doing so was efficient for team productivity. Yet we acknowledge that depending on H1's notes alone was limiting, as such notes might not be sufficiently detailed or might be biased. As elaborated in our findings, the varied preferences among DHH individuals affect their willingness to advocate for others. Moreover, it was not feasible to discuss every individual with whom H1 interacted in our collaborative sessions. This raises an ongoing question about how far an autoethnography group can expand the scope of its discussions and reflections.

7 Conclusions

From a ten-week immersive experience in a sign-language-centric educational and research setting, we encapsulate nuanced approaches for hearing individuals to communicate effectively with DHH individuals, and we showcase how DHH people go out of their way to accommodate a hearing individual in the signing environment and help her adopt and adapt to the communication protocols suitable for that environment. Emphasizing the value of working with communication technologies such as ASR, we delve into how acquiring ASL signs and visual cues can further enhance technology usage. We present three essential strategies for hearing people to engage with DHH individuals, even without ASL knowledge or an interpreter. Furthermore, we suggest design implications for accessible communication methods. Our case also demonstrates the effective use of group autoethnography as a methodology to reflect, discuss, analyze, and describe phenomena in real-world settings. Last but not least, we advocate for understanding and embracing diversity in culture, language, and ability among all people. Together, we can build a more inclusive society through collaborative efforts and accessible technology.

Acknowledgments

This report is based upon work supported by the National Science Foundation under Grant No. 2150429 and No. 2118824. The authors would like to thank the undergraduate participants, faculty mentors, and graduate research assistants at Gallaudet University's REU Site, and Dr. Yun Huang, Si Chen's Ph.D. advisor, for their guidance and valuable input. The authors would also like to thank the anonymous reviewers for their efforts and valuable comments.

Footnotes

1. Sim-Com is an abbreviation meaning simultaneous communication. It is the act of communicating in sign language and spoken language at the same time and is often used as a form of communication between people who are DHH and people who are hearing.

Supplemental Material

MP4 File: Video Preview (with transcript)
MP4 File: Video Presentation (with transcript)

References

[1] Paulo Bala, Pedro Sanches, Vanessa Cesário, Sarah Leão, Catarina Rodrigues, Nuno Jardim Nunes, and Valentina Nisi. 2023. Towards Critical Heritage in the wild: Analysing Discomfort through Collaborative Autoethnography. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. 1–19.
[2] H Dirksen Bauman and Joseph Murray. 2009. Reframing: From hearing loss to deaf gain. Deaf Studies Digital Journal 1, 1 (2009), 1–10.
[3] Cynthia L Bennett, Daniela K Rosner, and Alex S Taylor. 2020. The care work of access. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–15.
[4] Bhavya Bhavya, Si Chen, Zhilin Zhang, Wenting Li, Chengxiang Zhai, Lawrence Angrave, and Yun Huang. 2022. Exploring collaborative caption editing to augment video-based learning. Educational Technology Research and Development 70, 5 (2022), 1755–1779.
[5] Stacy M Branham and Shaun K Kane. 2015. Collaborative accessibility: How blind and sighted companions co-create accessible home spaces. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. 2373–2382.
[6] Heewon Chang. 2013. Individual and collaborative autoethnography as method. Handbook of Autoethnography (2013), 107–122.
[7] Si Chen, Xinyue Chen, Zhicong Lu, and Yun Huang. 2023. "My Culture, My People, My Hometown": Chinese Ethnic Minorities Seeking Cultural Sustainability by Video Blogging. Proceedings of the ACM on Human-Computer Interaction 7, CSCW1 (2023), 1–30.
[8] Si Chen, Desirée Kirst, Qi Wang, and Yun Huang. 2023. Exploring Think-aloud Method with Deaf and Hard of Hearing College Students. In Proceedings of the 2023 ACM Designing Interactive Systems Conference. 1757–1772.
[9] Juliet M Corbin and Anselm Strauss. 1990. Grounded theory research: Procedures, canons, and evaluative criteria. Qualitative Sociology 13, 1 (1990), 3–21.
[10] Sally Jo Cunningham and Matt Jones. 2005. Autoethnography: a tool for practice and education. In Proceedings of the 6th ACM SIGCHI New Zealand Chapter's International Conference on Computer-Human Interaction: Making CHI Natural. 1–8.
[11] Carolyn Ellis and Art Bochner. 2000. Autoethnography, personal narrative, reflexivity: Researcher as subject. (2000).
[12] Michele Friedner and Annelies Kusters. 2015. It's a Small World: International Deaf Spaces and Encounters. Gallaudet University Press.
[13] Megan Hofmann, Devva Kasnitz, Jennifer Mankoff, and Cynthia L Bennett. 2020. Living disability theory: Reflections on access, research, and design. In Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility. 1–13.
[14] Sarah Homewood. 2023. Self-Tracking to Do Less: An Autoethnography of Long COVID That Informs the Design of Pacing Technologies. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. 1–14.
[15] Dhruv Jain, Bonnie Chinh, Leah Findlater, Raja Kushalnagar, and Jon Froehlich. 2018. Exploring augmented reality approaches to real-time captioning: A preliminary autoethnographic study. In Proceedings of the 2018 ACM Conference Companion Publication on Designing Interactive Systems. 7–11.
[16] Dhruv Jain, Audrey Desjardins, Leah Findlater, and Jon E Froehlich. 2019. Autoethnography of a hard of hearing traveler. In Proceedings of the 21st International ACM SIGACCESS Conference on Computers and Accessibility. 236–248.
[17] Dhruv Jain, Venkatesh Potluri, and Ather Sharif. 2020. Navigating graduate school with a disability. In Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility. 1–11.
[18] Raja S Kushalnagar, Poorna Kushalnagar, and Jeffrey B Pelz. 2012. Deaf and Hearing Students' Eye Gaze Collaboration. In Computers Helping People with Special Needs: 13th International Conference, ICCHP 2012, Linz, Austria, July 11–13, 2012, Proceedings, Part I. Springer, 92–99.
[19] Raja S Kushalnagar, Walter S Lasecki, and Jeffrey P Bigham. 2013. Captions versus transcripts for online video content. In Proceedings of the 10th International Cross-Disciplinary Conference on Web Accessibility. 1–4.
[20] Raja S Kushalnagar and Christian Vogler. 2020. Teleconference accessibility and guidelines for deaf and hard of hearing users. In Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility. 1–6.
[21] Jin Sook Lee and Mary Bucholtz. 2015. Language socialization across learning spaces. The Handbook of Classroom Discourse and Interaction (2015), 319–336.
[22] Kelly Mack, Maitraye Das, Dhruv Jain, Danielle Bragg, John Tang, Andrew Begel, Erin Beneteau, Josh Urban Davis, Abraham Glasser, Joon Sung Park, et al. 2021. Mixed Abilities and Varied Experiences: a group autoethnography of a virtual summer internship. In Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility. 1–13.
[23] Kelly Mack, Emma McDonnell, Venkatesh Potluri, Maggie Xu, Jailyn Zabala, Jeffrey Bigham, Jennifer Mankoff, and Cynthia Bennett. 2022. Anticipate and adjust: Cultivating access in human-centered methods. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. 1–18.
[24] James R Mallory, Michael Stinson, Lisa Elliot, and Donna Easton. 2017. Personal perspectives on using automatic speech recognition to facilitate communication between deaf students and hearing customers. In Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility. 419–421.
[25] Aron S Marie. 2020. Finding interpreters who can "OPEN-THEIR-MIND": How Deaf teachers select sign language interpreters in Hà Nội, Việt Nam. Sign Language Ideologies in Practice (2020), 129–144.
[26] Emma J McDonnell, Ping Liu, Steven M Goodman, Raja Kushalnagar, Jon E Froehlich, and Leah Findlater. 2021. Social, environmental, and technical: Factors at play in the current use and future design of small-group captioning. Proceedings of the ACM on Human-Computer Interaction 5, CSCW2 (2021), 1–25.
[27] Emma J McDonnell, Soo Hyun Moon, Lucy Jiang, Steven M Goodman, Raja Kushalnagar, Jon E Froehlich, and Leah Findlater. 2023. "Easier or Harder, Depending on Who the Hearing Person Is": Codesigning Videoconferencing Tools for Small Groups with Mixed Hearing Status. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. 1–15.
[28] Carol Jan Neidle. 2000. The Syntax of American Sign Language: Functional Categories and Hierarchical Structure. MIT Press.
[29] Colorado Department of Human Services. 2022. Deaf, Hard of Hearing, and DeafBlind Demographics Guide. https://ccdhhdb.com/wp-content/uploads/2022/09/DHHDB-Demographics.pdf
[30] Carol Padden and Tom Humphries. 2009. Inside Deaf Culture. Harvard University Press.
[31] Leah Lakshmi Piepzna-Samarasinha. 2018. Care Work: Dreaming Disability Justice. Arsenal Pulp Press, Vancouver.
[32] Soraia Silva Prietch, Napoliana Silva de Souza, and Lucia Villela Leite Filgueiras. 2014. A Speech-To-Text System's Acceptance Evaluation: Would Deaf Individuals Adopt This Technology in Their Lives? In Universal Access in Human-Computer Interaction: 8th International Conference, UAHCI 2014, Heraklion, Crete, Greece, June 22–27, 2014, Proceedings, Part I. Springer, 440–449.
[33] Jazz Rui Xia Ang, Ping Liu, Emma McDonnell, and Sarah Coppola. 2022. "In this online environment, we're limited": Exploring Inclusive Video Conferencing Design for Signers. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. 1–16.
[34] Olof Sandgren, Richard Andersson, Joost van de Weijer, Kristina Hansson, and Birgitta Sahlén. 2014. Coordination of gaze and speech in communication between children with hearing impairment and normal-hearing peers. Journal of Speech, Language, and Hearing Research 57, 3 (2014), 942–951.
[35] Matthew Seita, Khaled Albusays, Sushant Kafle, Michael Stinson, and Matt Huenerfauth. 2018. Behavioral changes in speakers who are automatically captioned in meetings with deaf or hard-of-hearing peers. In Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility. 68–80.
[36] Matthew Seita, Sarah Andrew, and Matt Huenerfauth. 2021. Deaf and hard-of-hearing users' preferences for hearing speakers' behavior during technology-mediated in-person and remote conversations. In Proceedings of the 18th International Web for All Conference. 1–12.
[37] Matthew Seita and Matt Huenerfauth. 2020. Deaf individuals' views on speaking behaviors of hearing peers when using an automatic captioning app. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems. 1–8.
[38] Matthew Seita, Sooyeon Lee, Sarah Andrew, Kristen Shinohara, and Matt Huenerfauth. 2022. Remotely Co-Designing Features for Communication Applications Using Automatic Captioning with Deaf and Hearing Pairs. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI '22). Association for Computing Machinery, New York, NY, USA, Article 460, 13 pages. https://doi.org/10.1145/3491102.3501843
[39] Matthew Seita, Sooyeon Lee, Sarah Andrew, Kristen Shinohara, and Matt Huenerfauth. 2022. Remotely Co-Designing Features for Communication Applications using Automatic Captioning with Deaf and Hearing Pairs. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. 1–13.
[40] Jenny L Singleton and Peter K Crume. 2022. The socialization of modality capital in sign language ecologies: A classroom example. Frontiers in Psychology 13 (2022), 934649.
[41] Katta Spiel and Robin Angelini. 2022. Expressive Bodies: Engaging with Embodied Disability Cultures for Collaborative Design Critiques. In Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility. 1–6.
[42] Stephanie Tevenal and Miako Villanueva. 2009. Are you getting the message? The effects of SimCom on the message received by deaf, hard of hearing, and hearing students. Sign Language Studies 9, 3 (2009), 266–286.
[43] Clayton Valli and Ceil Lucas. 2000. Linguistics of American Sign Language: An Introduction. Gallaudet University Press.
[44] Christian Vogler, Paula Tucker, and Norman Williams. 2013. Mixed local and remote participation in teleconferences from a deaf and hard of hearing perspective. In Proceedings of the 15th International ACM SIGACCESS Conference on Computers and Accessibility. 1–5.
[45] Emily Q Wang and Anne Marie Piper. 2018. Accessibility in action: Co-located collaboration among deaf and hearing professionals. Proceedings of the ACM on Human-Computer Interaction 2, CSCW (2018), 1–25.
[46] Jingyi Xie, Rui Yu, Kaiming Cui, Sooyeon Lee, John M. Carroll, and Syed Masum Billah. 2023. Are Two Heads Better than One? Investigating Remote Sighted Assistance with Paired Volunteers. In Proceedings of the 2023 ACM Designing Interactive Systems Conference (DIS '23). Association for Computing Machinery, New York, NY, USA, 1810–1825. https://doi.org/10.1145/3563657.3596019
