


Multi-CLIP: Contrastive Vision-Language Pre-training for Question Answering tasks in 3D Scenes
Alexandros Delitzas (ETH Zurich),* Maria Parelli (ETH Zurich), Nikolas Hars (ETH Zurich), Georgios Vlassis (ETH Zurich), Sotirios-Konstantinos Anagnostidis (ETH Zurich), Gregor Bachmann (ETH Zurich), Thomas Hofmann (ETH Zurich)
The 34th British Machine Vision Conference

Abstract

Training models to apply common-sense linguistic knowledge and visual concepts from 2D images to 3D scene understanding is a promising direction that researchers have only recently started to explore. However, it remains understudied whether knowledge distilled from 2D can provide useful representations for downstream 3D vision-language tasks such as 3D question answering. In this paper, we propose a novel 3D vision-language pre-training method, Multi-CLIP, that enables a model to learn language-grounded and transferable 3D scene point cloud representations. We leverage the representational power of the CLIP model by maximizing the agreement between the encoded 3D scene features and the corresponding 2D multi-view image and text embeddings in the CLIP space via a contrastive objective. To validate our approach, we consider the challenging downstream tasks of 3D Visual Question Answering (3D-VQA) and 3D Situated Question Answering (3D-SQA). To this end, we develop novel multi-modal transformer-based architectures and demonstrate how our pre-training method can benefit their performance. Quantitative and qualitative experimental results show that Multi-CLIP outperforms state-of-the-art methods on the downstream tasks of 3D-VQA and 3D-SQA and leads to a well-structured 3D scene feature space.
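The contrastive objective described above — maximizing agreement between encoded 3D scene features and their matching CLIP image/text embeddings — can be sketched as a symmetric InfoNCE loss. The following is a minimal, framework-agnostic NumPy illustration, not the authors' implementation: all function names, the shapes, and the temperature value are illustrative assumptions.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project embeddings onto the unit sphere."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def softmax_cross_entropy(logits, labels):
    """Mean cross-entropy with integer labels, numerically stable."""
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def contrastive_alignment_loss(scene_feats, clip_feats, temperature=0.07):
    """Symmetric InfoNCE loss between 3D scene features and CLIP-space targets.

    scene_feats: (N, D) encoded 3D scene point-cloud features
    clip_feats:  (N, D) corresponding CLIP embeddings (multi-view image or text)
    temperature: softmax temperature (an assumed value, common in CLIP-style losses)
    """
    z_s = l2_normalize(scene_feats)
    z_c = l2_normalize(clip_feats)
    logits = z_s @ z_c.T / temperature      # (N, N) cosine-similarity matrix
    labels = np.arange(logits.shape[0])     # diagonal entries are positive pairs
    # Average over both matching directions (scene -> CLIP and CLIP -> scene).
    return 0.5 * (softmax_cross_entropy(logits, labels)
                  + softmax_cross_entropy(logits.T, labels))
```

In this formulation, each scene embedding is pulled toward its own CLIP target and pushed away from the targets of other scenes in the batch; perfectly aligned pairs yield a lower loss than mismatched ones.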


Citation

@inproceedings{Delitzas_2023_BMVC,
author    = {Alexandros Delitzas and Maria Parelli and Nikolas Hars and Georgios Vlassis and Sotirios-Konstantinos Anagnostidis and Gregor Bachmann and Thomas Hofmann},
title     = {Multi-CLIP: Contrastive Vision-Language Pre-training for Question Answering tasks in 3D Scenes},
booktitle = {34th British Machine Vision Conference 2023, {BMVC} 2023, Aberdeen, UK, November 20-24, 2023},
publisher = {BMVA},
year      = {2023},
url       = {https://papers.bmvc2023.org/0748.pdf}
}


Copyright © 2023 The British Machine Vision Association and Society for Pattern Recognition
The British Machine Vision Conference is organised by The British Machine Vision Association and Society for Pattern Recognition. The Association is a Company limited by guarantee, No.2543446, and a non-profit-making body, registered in England and Wales as Charity No.1002307 (Registered Office: Dept. of Computer Science, Durham University, South Road, Durham, DH1 3LE, UK).
