Towards Co-Creating Access and Inclusion: A Group Autoethnography on a Hearing Individual's Journey Towards Effective Communication in Mixed-Hearing Ability Higher Education Settings
Abstract
1 Introduction
2 Related Works
2.1 Accessibility in Hearing-DHH Collaborations
2.2 Deaf Culture, Communication, and Sign Language
3 Background and Method
3.1 Ten-Week Immersive Experience: Varied Hearing and Signing Ability at the REU Summer Program
3.1.1 Membership: Undergraduate Participants, Graduate Research Assistants, and Faculty Mentors.
3.1.2 Technology Infrastructure.
3.2 Autoethnography as Research Method
3.2.1 Data Collection.
| ID | Hearing Status | Role in REU | Positionality |
|----|----------------|-------------|---------------|
| H1 | Hearing | Research Assistant | A Ph.D. student in Information Sciences whose research focuses on inclusive and accessible educational technology. She initially learned about 200 ASL vocabulary words using online resources before the summer program and further improved her proficiency during the program. |
| H2 | Hearing | Faculty Mentor | A faculty member who has taught DHH students for over 30 years. With an Ed.S. and Ph.D. in computing technology in education, she has conducted numerous research projects with DHH undergraduates on technology integration, e-learning, technology-supported learning solutions for special populations, online education, instructional design and evaluation for blended learning, and learning assessment. |
| D1 | Deaf | Faculty Mentor | A postdoctoral researcher who uses hearing aids regularly. His primary language at work and in day-to-day life is ASL, and he often communicates in spoken English as well. His current research focuses on leveraging artificial intelligence technologies to develop accessibility tools for DHH people. He holds a Ph.D. in Psychology with a background in sign language linguistics and language development, and is himself a former REU student of this program (summer 2015, as an undergraduate). |
| D2 | Deaf | Faculty Mentor | A postdoctoral researcher who uses hearing aids and is fluent in written English and ASL. He has a Ph.D. in Computing and Information Sciences. His research lies at the intersection of computer science, human-computer interaction, and accessibility. He primarily works on accessibility for the DHH community and has conducted studies investigating the design and usability of automatic captioning, automatic speech recognition technologies, and other accessible technologies. He is a former REU student (2015) and REU graduate student mentor (2016) of this program. |
| D3 | Deaf | Faculty Mentor | A professor and Director of the Technology Access Program research group. He also co-directs the Accessible Human-Centered Computing graduate program and the REU program, and has led large accessibility-related federal grants and contracts for the past ten years. He has strong ties to DHH consumer advocates and collaborates closely with them to disseminate research findings to policy makers and industry. He holds a Ph.D. in Computer Science. |
| D4 | Deaf | Faculty Mentor | A professor and Director of the Information Technology undergraduate program and the Accessible Human-Centered Computing graduate program. With over fifteen years of experience in the accessible technology field, he brings a wealth of lived experience and research to the field. He focuses on strategic planning, local industry, alumni relations, and faculty support. He has a Ph.D. in Computer Science, a Master of Laws (LL.M.) in Intellectual Property and Information Law, and a Juris Doctor (J.D.). |
3.2.2 Data Analysis.
4 Findings: Common Synchronous Learning Scenarios and Technology Use
4.1 Scenario 1: Attending Large Presentations with Interpreters
4.2 Scenario 2: Leading Small Group Presentations with/without Interpreters
| Common Scenarios | People Count | Occurrences over 10 Weeks | Interpreter? | Technologies Used | Non-Verbal Cues | Basic ASL Signs |
|------------------|--------------|---------------------------|--------------|-------------------|-----------------|-----------------|
| In-person: Attend Presentations | 10-30 | 10 | Always | Type on Phone | Tap Shoulder | “Yes”, “Later” |
| In-person: Make Presentations | 5-10 | 5 | Sometimes | ASR on Shared Screen and Personal Devices | Point at Screen; Wait for Audience Visual Attention | “Keep Going”, “(Don’t) Understand” |
| In-person: Mentor Undergrads | 2-5 | Daily | Never | ASR on Shared Screen; Digital Pointer/Highlighter; Type on Digital Sticker | Point at Screen; Tap Shoulder; Shift Head Direction | “Look At”, “Say Again”, “(Don’t) Understand” |
| Hybrid: Mentor Undergrads | 2-5 | Bi-Weekly | Never | Type in Zoom Chat; Type in Shared Online Doc; ASR on Zoom; FaceTime; Slack Huddle | Wave Hand; Shift Body Direction; Wait for Audience Attention | “Type”, “Hold”, “Come”, “(Don’t) Understand” |
4.3 Scenario 3: Informal F2F Mentoring Sessions without Interpreters
4.4 Scenario 4: Informal Hybrid Mentoring Sessions without Interpreters
5 Findings: Effectiveness of Technology-Mediated Communication Strategies
5.1 Overview of the (In)Effectiveness of Three Technology-Mediated Communication Strategies
5.1.1 Directing Visual Attention.
I always take detailed notes on Google Docs and even do that in all-deaf teams when everyone knows ASL. My experience has shown that it greatly reduces miscommunications and misunderstandings about individual and shared responsibilities, irrespective of the mode of communication used. I suspect that it is partly due to the problem of split visual attention, which persists even in an all-ASL environment and is exacerbated by students who easily get distracted. – D3
Directing visual attention works well when the people or technology supports it – such as in Zoom where the window border of the active speaker is bolded. Unfortunately, most attention directing is designed around speakers, not around signers. – D4
5.1.2 Pause-and-Proceed.
“Pause and proceed worked well in **anonymous** when everyone had an intuitive understanding of pausing when people were not paying attention by not looking in proximity. It did not work as well in meetings where some of the audience did not understand this.” – D4
... if visual attention is not maintained, then waving your hand to get attention is a good solution to make sure everyone is on track. Pause-and-proceed methods of capturing attention such as waving hands would work in moderation. If you try to lock eye contact and grab attention 20 times during a 15-minute presentation, for example, that would be excessive. If speaking, it’s probably good to also allow people to “interrupt” you with questions or clarification to help ease any misunderstandings. – D3
... in a group environment, it’s difficult to get everyone’s visual focus as DHH individuals love to sign to each other. With signing, they can easily carry out a side conversation even when they do not sit together. For example, in a classroom, one student can sign to another student sitting on the other side of the room and carry out their own conversation. When this happens, I normally stop my lecturing and wait until one or more students in the class signal to their peers and demand that they stop their side conversation and pay attention so the class lecture can continue. – H2
5.1.3 Back-channeling via Expressive Body to Maintain Communication Flow.
5.2 Iterative and Collaborative Reflection on the (In)Effective Strategies
5.2.1 Hearing Individuals Not Always Aware of Communication Breakdowns: DHH Individuals Explained Various Forms of Breakdowns.
... if I feel like the clarification would require time, then I am inclined not to give an explanation right away – though it’s not necessarily rude to ask, since sometimes the answer is indeed quick and easy to give. While this is reminiscent of ‘dinner table syndrome’ (deaf being told by hearing ‘it’s not important’ or ‘I’ll tell you later’) it does practically ask me to give up my own opportunity to gather information during a presentation (especially since as a deaf person I cannot rely on the spoken language interpretation to keep track of the presentation) – D1
There are only bad options: sim-com, which distorts both my speaking and signing; typing, which is slow; or signing and then repeating what I signed in speech, which is also slow and inefficient. – D3
Sign language is my preferred modality and I do not want people to assume I can understand speech or am comfortable speaking – D1
If they continue to speak without trying other forms of communication (typing, etc.) I will continue to indicate “deaf” by pointing to my ear and potentially just walking away. – D2
5.2.2 DHH Individuals Help Hearing Individuals Anticipate Communication Breakdowns.
I think they (Automatic captions) function best in combination with other modalities – for example, if the hearing speaker already has slides ready or visual aids they can point to and comment on the relevant bullet points or diagrams, then reference the relevant parts of the automatic captions that captured the comments (and in doing so, check for themselves that the captions are accurate and check for understanding with team members). This is much better than speaking continuously without materials or checking captions. – D1
Auto-caption apps are growing in use – but they too effectively hide the access problem from hearing people, such that when a communication breakdown occurs, it is harder to repair. – D1
One thing about typing on desktop or collaboratively is that I expect still there to be a lot of in-person communication – short responses to typed messages should be done through gesture rather than in typed modality (‘do you understand/ yes I understand’) and anything that the new signer knows how to sign, should be signed. I find the first author didn’t always do this and would stay in the typed modality ‘too much’ and also crucially, not initiate or search for eye contact enough (more so at the beginning of our interactions) – D1
Typing on the phone worked well in casual one-on-one encounters with hearing individuals. I could quickly open the notepad app and since I am a quick typer/texter I can quickly get my ideas across. This method falls apart in group settings. It was not feasible to type on the phone and show it to multiple people, and how would they all respond back to me? On their own phone? Not the best solution.... I did not have a negative experience with collaborative typing (in a shared doc), however... it does not lend well to overlapping conversational voices. (If someone is typing while I am typing I have to stop typing to respond to their typing, and it gets complicated from there) – D2
We (two hearing individuals) used Google shared doc at small meetings with D1, it worked exceptionally well as all three of us are fluent and comfortable with the technology. However, when it was used at a bigger meeting with several deaf people... They started signing as soon as they saw the typed text in the shared doc and did not follow the collaborative typing protocol for turn-taking. – H2
The autocaptions worked well in 1-1 meetings where there was no noise. The autocaptions did not work when there were many people – I did not know who was talking. Also, the autocaptions worked better for some speakers and not so well for other speakers.... Ask me what would help with communication, as I am in this situation most of the time; they’re not. – D4
Auto captioning works well for receptive listening and for engaging with hearing people when I am the only deaf participant. In such scenarios, I am comfortable speaking for myself to respond. However, this breaks down if there are other deaf attendees, because now I have to ensure that they are not left out. I have a deaf accent, so auto captioning does not work reliably for my voice. That means that speaking alone, without the presence of an interpreter, is not workable... The main thing is to be respectful of my communication preferences. Technology is a tool that can be useful, but it needs to be used on my terms. That is, I have to have a say in what technology is used and how. – D3
5.2.3 Power Dynamics and Fixing Communication Breakdown.
The people with less power are more hesitant to say that they did not understand and to ask to repeat or switch to a different communication modality. The burden then shifts to the person with more power to monitor communication breakdowns and fix them as needed. – D4

... situations are more formal when there is a power imbalance, and it can be harder to be flexible in communication strategies – using props, switching between written/typed/signed/spoken modalities (the person in authority is responsible for indicating that being flexible is OK)... Recognizing my role in not just teaching people how to interact with me but with the community at large. My methods should and can be scalable. – D1
6 Discussion
6.1 Design Implications for Inclusive DHH-Hearing Communication Technology
| Common Technology | Preferred Social Scale | Environment Requirements | Benefits for Communication | Possible Communication Breakdowns |
|-------------------|------------------------|--------------------------|----------------------------|-----------------------------------|
| Type on Phone | One-to-One | Informal | Fast | The other person may not know how to respond |
| Speech via Auto Captioning (ASR) | Up to Three | Quiet | Mostly Accurate | Requires supplementary material for accurate comprehension; ineffective for certain hearing and DHH individuals |
| Type Collaboratively on Shared Doc | More than Three | Avoid Side Conversations | Very Accurate | Slow; following conversations demands visual attention; lacks interactivity when online/hybrid |
6.1.1 “Typing” as Accurate and Complementary Communication Modality.
6.1.2 Fostering Shared Visual Allocation as “Spotlights” for Turn-Taking.
6.1.3 Ironing Out the Switch between Technologies.
6.1.4 Making Two-Way Adaptations Visible to Increase Mutual Accommodation in Communication.
6.1.5 Learning Opportunities for New Signers.
6.2 Quick Steps, Messages Left: Towards Co-Creating Inclusiveness and Access
...is expected to put in work and understand that the choices in communication are not necessarily chosen to optimize their ease of communication and understanding – sometimes they will miss information or have to work to communicate clearly or work harder to understand others... The method that might convey the information the ‘quickest’ (speaking alone in our study context) is usually not ideal. – D1
6.3 Limitations and Future Work
6.4 Reflecting on Group Autoethnography Method
7 Conclusions
Acknowledgments
Footnotes