DOI: 10.1109/ISM.2014.38
Article

Cineast: A Multi-feature Sketch-Based Video Retrieval Engine

Published: 10 December 2014

Abstract

Despite the tremendous importance and availability of large video collections, support for video retrieval is still rather limited and mostly tailored to very specific use cases and collections. In image retrieval, for instance, standard keyword search on the basis of manual annotations and content-based image retrieval, based on the similarity to one or more query images, are well-established search paradigms, both in academic prototypes and in commercial search engines. Recently, with the proliferation of sketch-enabled devices, sketch-based retrieval has also received considerable attention. The latter two approaches build on intrinsic image features and rely on the representation of the objects of a collection in feature space. In this paper, we present Cineast, a multi-feature sketch-based video retrieval engine. The main objective of Cineast is to enable a smooth transition from content-based image retrieval to content-based video retrieval and to support powerful search paradigms in large video collections on the basis of user-provided sketches as query input. Cineast is capable of retrieving video sequences based on edge or color sketches as query input and even supports one or multiple exemplary video sequences as query input. Moreover, Cineast supports a novel approach to sketch-based motion queries by allowing a user to specify the motion of objects within a video sequence by means of (partial) flow fields, also specified via sketches. Using an emergent combination of multiple different features, Cineast is able to universally retrieve videos and video sequences without the need for prior knowledge or semantic understanding. An evaluation with a general-purpose video collection has shown the effectiveness and efficiency of the Cineast approach.
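The "emergent combination of multiple different features" described in the abstract suggests a late-fusion scheme, in which each feature module (e.g. edge sketch, color sketch, motion) scores candidate sequences independently and the per-feature scores are merged into a single ranking. The following is a minimal, hypothetical sketch of such weighted score fusion; the function name, weights, and score values are illustrative assumptions, not Cineast's actual API:

```python
# Hypothetical late-fusion sketch: combine per-feature similarity
# scores into one ranked result list. Names are illustrative only.

def fuse_scores(per_feature_scores, weights):
    """Fuse per-feature similarity scores into one ranking.

    per_feature_scores: dict mapping feature name -> {video_id: score in [0, 1]}
    weights:            dict mapping feature name -> non-negative weight
    Returns a list of (video_id, fused_score), best match first.
    """
    total_weight = sum(weights.values())
    fused = {}
    for feature, scores in per_feature_scores.items():
        # Normalize each feature's contribution by the total weight.
        w = weights.get(feature, 0.0) / total_weight
        for video_id, score in scores.items():
            fused[video_id] = fused.get(video_id, 0.0) + w * score
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

# Example: two feature modules score three candidate sequences.
scores = {
    "edge_sketch":  {"v1": 0.9, "v2": 0.4, "v3": 0.1},
    "color_sketch": {"v1": 0.6, "v2": 0.8, "v3": 0.2},
}
ranking = fuse_scores(scores, {"edge_sketch": 1.0, "color_sketch": 1.0})
# With equal weights, v1 fuses to 0.75, v2 to 0.60, v3 to 0.15.
```

A design note: late fusion keeps feature modules independent, so a new feature (such as a flow-field motion score) can be added without touching the others, which matches the paper's goal of combining multiple features without prior semantic understanding.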




      Published In

      ISM '14: Proceedings of the 2014 IEEE International Symposium on Multimedia
      December 2014
      403 pages
      ISBN:9781479943111

      Publisher

      IEEE Computer Society

      United States


      Author Tags

      1. Content-based Information Retrieval
      2. Motion-based Video Retrieval
      3. Video Retrieval

      Qualifiers

      • Article


      Article Metrics

• Downloads (last 12 months): 0
• Downloads (last 6 weeks): 0
Reflects downloads up to 01 Oct 2024


      Cited By

• (2024) Spatiotemporal Lifelog Analytics in Virtual Reality with vitrivr-VR. Proceedings of the 7th Annual ACM Workshop on the Lifelog Search Challenge, pp. 7-11. DOI: 10.1145/3643489.3661113. Online publication date: 10-Jun-2024.
• (2024) Exploring Multimedia Vector Spaces with vitrivr-VR. MultiMedia Modeling, pp. 317-323. DOI: 10.1007/978-3-031-53302-0_27. Online publication date: 29-Jan-2024.
• (2023) The Best of Both Worlds: Lifelog Retrieval with a Desktop-Virtual Reality Hybrid System. Proceedings of the 6th Annual ACM Lifelog Search Challenge, pp. 65-68. DOI: 10.1145/3592573.3593107. Online publication date: 12-Jun-2023.
• (2023) Multi-Mode Clustering for Graph-Based Lifelog Retrieval. Proceedings of the 6th Annual ACM Lifelog Search Challenge, pp. 36-40. DOI: 10.1145/3592573.3593102. Online publication date: 12-Jun-2023.
• (2023) A Comparison of Video Browsing Performance between Desktop and Virtual Reality Interfaces. Proceedings of the 2023 ACM International Conference on Multimedia Retrieval, pp. 535-539. DOI: 10.1145/3591106.3592292. Online publication date: 12-Jun-2023.
• (2022) Multimodal Interactive Lifelog Retrieval with vitrivr-VR. Proceedings of the 5th Annual on Lifelog Search Challenge, pp. 38-42. DOI: 10.1145/3512729.3533008. Online publication date: 27-Jun-2022.
• (2022) vitrivr at the Lifelog Search Challenge 2022. Proceedings of the 5th Annual on Lifelog Search Challenge, pp. 27-31. DOI: 10.1145/3512729.3533003. Online publication date: 27-Jun-2022.
• (2021) Interactive Multimodal Lifelog Retrieval with vitrivr at LSC 2021. Proceedings of the 4th Annual on Lifelog Search Challenge, pp. 35-39. DOI: 10.1145/3463948.3469062. Online publication date: 21-Aug-2021.
• (2021) Exploring Intuitive Lifelog Retrieval and Interaction Modes in Virtual Reality with vitrivr-VR. Proceedings of the 4th Annual on Lifelog Search Challenge, pp. 17-22. DOI: 10.1145/3463948.3469061. Online publication date: 21-Aug-2021.
• (2020) Towards Using Semantic-Web Technologies for Multi-Modal Knowledge Graph Construction. Proceedings of the 28th ACM International Conference on Multimedia, pp. 4645-4649. DOI: 10.1145/3394171.3416292. Online publication date: 12-Oct-2020.
