DOI: 10.5555/1108368.1108415

Article

Dawn explorer: a framework for multimodal accessibility to computer systems

Published: 21 November 2005

Abstract

Technology is advancing at a rapid pace, automating many everyday chores, changing the way we work and providing new forms of entertainment. Makers of technology, however, often fail to consider the needs of disabled users in their product designs, for example by not offering alternative means of input. Computers present a particular challenge to many disabled users who cannot see graphical user interfaces, use a mouse or keyboard, or otherwise interact with standard hardware. This paper presents a multimodal user interface that emulates and extends the functionality of the Windows Explorer application with alternative input and output methods. The project uses auditory and visual interaction technologies, is built on a modular and extensible architecture, and relies on off-the-shelf hardware to reduce implementation cost and maximize accessibility.
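
To make the modular, extensible architecture described above more concrete, the sketch below (in Python, purely illustrative and not drawn from the paper; all class and method names are hypothetical) shows one way a multimodal output layer might be organised: each output channel implements a common interface, and a coordinator broadcasts file-browser events to every registered channel, so new modalities can be added without changing the core application.

# Hypothetical sketch of a modular multimodal output layer for a
# file-explorer-style application. Names and structure are illustrative
# only and do not reflect the Dawn Explorer implementation.
from abc import ABC, abstractmethod


class OutputModality(ABC):
    """Common interface that every output channel (speech, visual, ...) implements."""

    @abstractmethod
    def present(self, event, detail):
        """Render a browsing event in this channel's modality."""


class SpeechOutput(OutputModality):
    def present(self, event, detail):
        # A real system would hand this string to a text-to-speech engine.
        print("[speech] %s: %s" % (event, detail))


class VisualOutput(OutputModality):
    def present(self, event, detail):
        # A real system would update an on-screen (possibly magnified) view.
        print("[screen] %s: %s" % (event, detail))


class ExplorerCoordinator:
    """Broadcasts file-browsing events to all registered output modalities."""

    def __init__(self):
        self._modalities = []

    def register(self, modality):
        self._modalities.append(modality)

    def announce(self, event, detail):
        for modality in self._modalities:
            modality.present(event, detail)


if __name__ == "__main__":
    coordinator = ExplorerCoordinator()
    coordinator.register(SpeechOutput())
    coordinator.register(VisualOutput())
    coordinator.announce("folder opened", "C:/Users/demo/Documents")
    coordinator.announce("file selected", "report.txt")

The same registration pattern could, in principle, cover input modalities as well (speech recognition, switch devices), which is the kind of extensibility the abstract refers to.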

    Published In

    OZCHI '05: Proceedings of the 17th Australia conference on Computer-Human Interaction: Citizens Online: Considerations for Today and the Future
    November 2005
    431 pages
    ISBN: 1595932224

    Publisher

    Computer-Human Interaction Special Interest Group (CHISIG) of Australia

    Narrabundah, Australia

    Author Tags

    1. GUI
    2. audio
    3. auditory interfaces
    4. blind
    5. disability
    6. human-computer interaction
    7. interface models
    8. multimodal interfaces
    9. rehabilitation engineering
    10. users with special needs
    11. visual impairment

    Qualifiers

    • Article

    Conference

    OZCHI '05: Computer-Human Interaction
    November 21 - 25, 2005
    Canberra, Australia

    Acceptance Rates

    Overall Acceptance Rate 362 of 729 submissions, 50%
