Accessibility in Action: Co-Located Collaboration among Deaf and Hearing Professionals

Published: 01 November 2018

Abstract

Although accessibility in academic and professional workplaces is a well-known issue, how teams whose members have different abilities communicate and coordinate in technology-rich workspaces is far less well understood. When hearing people collaborate around computers, they rely on the ability to simultaneously see and hear as they work on a shared document, talk to each other while editing, and gesture towards the screen. This interaction norm breaks down for teams of people with different sensory abilities, such as Deaf and hearing collaborators, who rely on visual communication. Through interviews and observations, we analyze how Deaf-hearing teams collaborate on a variety of naturalistic tasks. Our findings reveal that Deaf-hearing teams create accessibility through their moment-to-moment co-located interaction and through team practices that emerge over time. We conclude with a discussion of how studying co-located Deaf-hearing interaction extends our understanding of accessibility in mixed-ability teams and provides new insights for groupware systems.





Published In

Proceedings of the ACM on Human-Computer Interaction, Volume 2, Issue CSCW
November 2018
4104 pages
EISSN: 2573-0142
DOI: 10.1145/3290265
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published in PACMHCI Volume 2, Issue CSCW


Author Tags

  1. accessibility
  2. deafness
  3. group work
  4. video analysis

Qualifiers

  • Research-article

Article Metrics

  • Downloads (Last 12 months)254
  • Downloads (Last 6 weeks)35
Reflects downloads up to 18 Nov 2024


Cited By

  • (2024) Why is Accessibility So Hard? Insights From the History of Privacy. Companion Publication of the 2024 Conference on Computer-Supported Cooperative Work and Social Computing, 362-368. DOI: 10.1145/3678884.3681876. Online publication date: 11-Nov-2024.
  • (2024) Help and The Social Construction of Access: A Case-Study from India. Proceedings of the 26th International ACM SIGACCESS Conference on Computers and Accessibility, 1-12. DOI: 10.1145/3663548.3675606. Online publication date: 27-Oct-2024.
  • (2024) Neurodiversity and the Accessible University: Exploring Organizational Barriers, Access Labor and Opportunities for Change. Proceedings of the ACM on Human-Computer Interaction 8, CSCW1, 1-27. DOI: 10.1145/3641011. Online publication date: 26-Apr-2024.
  • (2024) Communication, Collaboration, and Coordination in a Co-located Shared Augmented Reality Game: Perspectives From Deaf and Hard of Hearing People. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-14. DOI: 10.1145/3613904.3642953. Online publication date: 11-May-2024.
  • (2024) "Speech is Silver, Silence is Golden": Analyzing Micro-communication Strategies between Visually Impaired Runners and their Guides. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-16. DOI: 10.1145/3613904.3642388. Online publication date: 11-May-2024.
  • (2024) Towards Co-Creating Access and Inclusion: A Group Autoethnography on a Hearing Individual's Journey Towards Effective Communication in Mixed-Hearing Ability Higher Education Settings. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-14. DOI: 10.1145/3613904.3642017. Online publication date: 11-May-2024.
  • (2023) Understanding Participation among Disabled Creators in Online Marketplaces. Proceedings of the ACM on Human-Computer Interaction 7, CSCW2, 1-28. DOI: 10.1145/3610105. Online publication date: 4-Oct-2023.
  • (2023) Understanding Social and Environmental Factors to Enable Collective Access Approaches to the Design of Captioning Technology. ACM SIGACCESS Accessibility and Computing, 1. DOI: 10.1145/3584732.3584735. Online publication date: 15-Feb-2023.
  • (2023) Accessibility Barriers, Conflicts, and Repairs: Understanding the Experience of Professionals with Disabilities in Hybrid Meetings. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1-15. DOI: 10.1145/3544548.3581541. Online publication date: 19-Apr-2023.
  • (2023) Community-Driven Information Accessibility: Online Sign Language Content Creation within d/Deaf Communities. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1-24. DOI: 10.1145/3544548.3581286. Online publication date: 19-Apr-2023.
