DOI: 10.1145/1065385.1065511

EVIADA: ethnomusicological video for instruction and analysis digital archive

Published: 07 June 2005

Abstract

The field of ethnomusicology depends heavily on ethnographic research, or "fieldwork," which often involves the capture and subsequent analysis of audio and video recordings to help document and understand the musical practices of people all over the world. Ethnomusicologists have used a variety of recording technologies over the years to capture film and video, and much of this footage lies in researchers' offices and home basements. No systematic mechanism exists for preserving this footage and providing access to it for other students and scholars.

The Ethnomusicological Video for Instruction and Analysis Digital Archive (EVIADA) [1] is a multi-year collaborative project between Indiana University and the University of Michigan to create a digital archive of field video recordings captured by ethnomusicology researchers. This archive will serve both to preserve the content for future generations of scholars and to provide a resource supporting teaching and learning in ethnomusicology, anthropology, and related disciplines. The creation of EVIADA has involved a unique collaboration among ethnomusicologists, librarians, archivists, and technologists in carrying out all stages of the project, including video digitization, metadata creation, and system and user interface design.

As part of the project, we are developing several software tools. The Segmentation/Annotation Tool is a Java Swing application written using Apple's QuickTime for Java API. It allows an ethnomusicologist who is contributing a video collection to the archive to divide that video into a hierarchy of segments, attach free-text descriptions and controlled vocabulary terms to each segment, and output this information as a METS [3] XML document incorporating MODS [2] descriptive metadata records. This METS document can then be ingested into downstream archival and delivery systems. We hope to evolve this software into a more general-purpose tool for creating METS documents for video objects.

We are also building a web-based user interface on top of the Fedora digital repository system to allow users to search and browse video content in the collection via the descriptive metadata and annotations, making appropriate use of controlled vocabulary thesauri to increase search recall.
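To make the tool's output concrete, the following is a minimal sketch, in Java using the JDK's built-in DOM API, of how a two-level segment hierarchy with per-segment descriptions might be serialized as a METS [3] document wrapping MODS [2] records. The dmdSec, mdWrap, xmlData, structMap, and div elements and the DMDID linkage come from the METS schema, and titleInfo/title comes from MODS, but the class name, segment titles, TYPE values, and IDs here are invented for illustration; the actual EVIADA tool's METS profile is richer than this.

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

// Sketch: serialize a two-level video segment hierarchy as a METS
// document with per-segment MODS titles. Profile details (IDs, TYPE
// values, titles) are invented, not the EVIADA tool's actual output.
public class MetsSketch {
    static final String METS_NS = "http://www.loc.gov/METS/";
    static final String MODS_NS = "http://www.loc.gov/mods/v3";

    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();
        Element mets = doc.createElementNS(METS_NS, "mets");
        doc.appendChild(mets);

        // One dmdSec per segment wraps that segment's MODS record.
        mets.appendChild(dmdSec(doc, "DMD1", "Full performance"));
        mets.appendChild(dmdSec(doc, "DMD2", "Opening procession"));

        // The structMap carries the segment hierarchy; each div points
        // back to its descriptive metadata via its DMDID attribute.
        Element structMap = doc.createElementNS(METS_NS, "structMap");
        Element performance = div(doc, "performance", "DMD1");
        performance.appendChild(div(doc, "segment", "DMD2"));
        structMap.appendChild(performance);
        mets.appendChild(structMap);

        // Pretty-print the finished document to standard output.
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.setOutputProperty(OutputKeys.INDENT, "yes");
        t.transform(new DOMSource(doc), new StreamResult(System.out));
    }

    // Build <dmdSec><mdWrap MDTYPE="MODS"><xmlData><mods> with a title.
    static Element dmdSec(Document doc, String id, String title) {
        Element dmd = doc.createElementNS(METS_NS, "dmdSec");
        dmd.setAttribute("ID", id);
        Element wrap = doc.createElementNS(METS_NS, "mdWrap");
        wrap.setAttribute("MDTYPE", "MODS");
        Element xmlData = doc.createElementNS(METS_NS, "xmlData");
        Element mods = doc.createElementNS(MODS_NS, "mods");
        Element titleInfo = doc.createElementNS(MODS_NS, "titleInfo");
        Element titleEl = doc.createElementNS(MODS_NS, "title");
        titleEl.setTextContent(title);
        titleInfo.appendChild(titleEl);
        mods.appendChild(titleInfo);
        xmlData.appendChild(mods);
        wrap.appendChild(xmlData);
        dmd.appendChild(wrap);
        return dmd;
    }

    // Build a structMap <div> with TYPE and DMDID attributes.
    static Element div(Document doc, String type, String dmdId) {
        Element d = doc.createElementNS(METS_NS, "div");
        d.setAttribute("TYPE", type);
        d.setAttribute("DMDID", dmdId);
        return d;
    }
}

The sketch reflects the division of labor that makes METS attractive for this use: descriptive metadata lives in per-segment dmdSec elements, while the structMap records only the segment hierarchy and links each div to its description through DMDID, so downstream archival and delivery systems can traverse the hierarchy without interpreting the MODS records themselves.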

References

[1] EVIA Digital Archive. http://www.indiana.edu/~eviada/
[2] Library of Congress. Metadata Object Description Schema (MODS). http://www.loc.gov/standards/mods/
[3] Library of Congress. Metadata Encoding and Transmission Standard (METS). http://www.loc.gov/standards/mets/

Cited By

  • (2018) Multimodal query-level fusion for efficient multimedia information retrieval. International Journal of Intelligent Systems 33(10), 2019-2037. DOI: 10.1002/int.21920. Online publication date: 31-May-2018.
  • (2015) Efficient Multimedia Information Retrieval with Query Level Fusion. Flexible Query Answering Systems 2015, 367-379. DOI: 10.1007/978-3-319-26154-6_28. Online publication date: 21-Oct-2015.
  • (2009) M3L. Proceedings of the 2009 Sixth International Conference on Information Technology: New Generations, 1067-1072. DOI: 10.1109/ITNG.2009.8. Online publication date: 27-Apr-2009.
  • (2009) Unified Multimodal Search Framework for Multimedia Information Retrieval. Advanced Techniques in Computing Sciences and Software Engineering, 129-136. DOI: 10.1007/978-90-481-3660-5_22. Online publication date: 15-Dec-2009.



Published In

JCDL '05: Proceedings of the 5th ACM/IEEE-CS joint conference on Digital libraries
June 2005
450 pages
ISBN:1581138768
DOI:10.1145/1065385
General Chair: Mary Marlino
Program Chairs: Tamara Sumner, Frank Shipman
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 07 June 2005


Author Tags

  1. annotation
  2. ethnomusicology
  3. metadata
  4. music
  5. video

Qualifiers

  • Article

Conference

JCDL05

Acceptance Rates

Overall Acceptance Rate: 415 of 1,482 submissions, 28%


Article Metrics

  • Downloads (last 12 months): 1
  • Downloads (last 6 weeks): 0
Reflects downloads up to 18 Feb 2025.

