DOI: 10.1145/3025453.3025779

ViVo: Video-Augmented Dictionary for Vocabulary Learning

Published: 02 May 2017

Abstract

Research on Computer-Assisted Language Learning (CALL) has shown that the use of multimedia materials such as images and videos can facilitate interpretation and memorization of new words and phrases by providing richer cues than text alone. We present ViVo, a novel video-augmented dictionary that offers an inexpensive, convenient, and scalable way to exploit vast online video resources for vocabulary learning. ViVo automatically generates short video clips from existing movies with the target word highlighted in the subtitles. In particular, we apply a word sense disambiguation algorithm to identify movie scenes that use the appropriate sense of the word and provide adequate contextual information for learning. We analyze the challenges and feasibility of this approach and describe our interaction design. A user study showed that learners retained nearly 30% more new words with ViVo than with a standard bilingual dictionary when tested days after learning. Participants preferred the video-augmented dictionary, citing its memorization benefits and more enjoyable learning experience.
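
The abstract describes two technical steps: finding subtitle lines that contain a target word, and applying word sense disambiguation (WSD) so that only scenes where the word appears in the intended sense are kept. The sketch below illustrates that pipeline in miniature. It is not the authors' implementation: the SubtitleLine structure, the select_clips helper, the padding heuristic, and the toy glosses are all illustrative assumptions, and a simplified Lesk-style gloss overlap stands in for the paper's actual WSD algorithm.

    # A minimal sketch of the clip-selection idea, not the authors' implementation.
    # SubtitleLine, select_clips, the padding heuristic, and the glosses below are
    # illustrative assumptions; a simplified Lesk-style gloss overlap stands in for
    # the paper's word sense disambiguation step.
    import re
    from dataclasses import dataclass

    @dataclass
    class SubtitleLine:
        start: float  # seconds into the movie
        end: float
        text: str

    def tokenize(text: str) -> set[str]:
        """Lowercase bag of word tokens used for context/gloss overlap."""
        return set(re.findall(r"[a-z']+", text.lower()))

    def candidate_clips(subs: list[SubtitleLine], target: str, padding: float = 2.0):
        """Yield (start, end, line) spans whose subtitle text contains the target word."""
        pattern = re.compile(rf"\b{re.escape(target)}\b", re.IGNORECASE)
        for line in subs:
            if pattern.search(line.text):
                yield max(line.start - padding, 0.0), line.end + padding, line

    def sense_score(context: str, gloss: str) -> int:
        """Simplified Lesk: number of tokens shared by the subtitle context and a gloss."""
        return len(tokenize(context) & tokenize(gloss))

    def select_clips(subs, target, sense_glosses, wanted_sense):
        """Keep clips whose context matches the wanted sense better than any other sense."""
        selected = []
        for start, end, line in candidate_clips(subs, target):
            scores = {s: sense_score(line.text, g) for s, g in sense_glosses.items()}
            if max(scores, key=scores.get) == wanted_sense:
                selected.append((start, end, line.text))
        return selected

    if __name__ == "__main__":
        subs = [
            SubtitleLine(10.0, 12.5, "He sat on the river bank and watched the water."),
            SubtitleLine(40.0, 43.0, "I need to stop by the bank to deposit this check."),
        ]
        glosses = {
            "financial": "an institution where you deposit money or cash a check",
            "river": "the sloping land beside a river or body of water",
        }
        # Only the scene using "bank" in the financial sense should survive.
        print(select_clips(subs, "bank", glosses, wanted_sense="financial"))

In a full system the subtitles would come from time-aligned subtitle files for the movie, and each selected (start, end) span would be cut into a short clip with the target word highlighted in the rendered subtitle, as the abstract describes.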

Supplementary Material

• suppl.mov (pn2609-file3.mp4): supplemental video
• suppl.mov (pn2609p.mp4): supplemental video

Cited By

• NariTan: Enhancing Second Language Vocabulary Learning Through Non-Human Avatar Embodiment in Immersive Virtual Reality. Multimodal Technologies and Interaction 8(10), 93, 2024. DOI: 10.3390/mti8100093
• RetAssist: Facilitating Vocabulary Learners with Generative Images in Story Retelling Practices. Proceedings of the 2024 ACM Designing Interactive Systems Conference, 2019–2036, 2024. DOI: 10.1145/3643834.3661581
• Auto-Generating Multimedia Language Learning Material for Children with Off-the-Shelf AI. Proceedings of Mensch und Computer 2022, 96–105, 2022. DOI: 10.1145/3543758.3543777
• EmoTan: enhanced flashcards for second language vocabulary learning with emotional binaural narration. Research and Practice in Technology Enhanced Learning 14(1), 2019. DOI: 10.1186/s41039-019-0109-0

Index Terms

  1. ViVo: Video-Augmented Dictionary for Vocabulary Learning

    Recommendations

    Comments

    Please enable JavaScript to view thecomments powered by Disqus.

Published In

CHI '17: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems
May 2017, 7138 pages
ISBN: 9781450346559
DOI: 10.1145/3025453

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

1. dictionary
2. movie clips
3. subtitles
4. vocabulary learning

Qualifiers

• Research-article

Funding Sources

• Tsinghua University Research Funding
• National Key Research and Development Plan
• NExT Search Centre by the Singapore National Research Foundation
• Natural Science Foundation of China

Acceptance Rates

CHI '17 Paper Acceptance Rate: 600 of 2,400 submissions, 25%
Overall Acceptance Rate: 6,199 of 26,314 submissions, 24%


