Look, Imagine and Match: Improving Textual-Visual Cross-Modal Retrieval with Generative Models (CVPR 2018 Spotlight)
Jiuxiang Gu, Jianfei Cai, Shafiq Joty, Li Niu, Gang Wang
Nov 17, 2017

Textual-visual cross-modal retrieval has been a hot research topic in both the computer vision and natural language processing communities. This work proposes to incorporate generative processes into the cross-modal feature embedding, through which it is able to learn not only the global abstract features but also the local grounded features. Extensive experiments show that the framework matches images and sentences with complex content well and achieves state-of-the-art cross-modal retrieval results on the MSCOCO dataset.
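As a rough illustration of the recipe the abstract describes (a shared image-text embedding trained with a ranking loss, plus a generative reconstruction objective that pushes the embedding to keep locally grounded detail), here is a minimal PyTorch-style sketch. The class and function names, the choice of a caption decoder as the generative branch, and all hyperparameters are illustrative assumptions, not the authors' released model, whose full architecture is more involved.

```python
# Minimal sketch (assumed design, not the paper's implementation): a joint
# image-text embedding trained with a triplet ranking loss, plus a caption
# reconstruction loss as the generative auxiliary signal.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointEmbeddingWithGeneration(nn.Module):
    def __init__(self, img_feat_dim=2048, vocab_size=10000, embed_dim=1024, word_dim=300):
        super().__init__()
        # Global embedding branches: project image features and encode sentences
        # into a common space used for retrieval.
        self.img_proj = nn.Linear(img_feat_dim, embed_dim)
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.txt_enc = nn.GRU(word_dim, embed_dim, batch_first=True)
        # Generative branch: a caption decoder conditioned on the image embedding.
        self.dec = nn.GRU(word_dim, embed_dim, batch_first=True)
        self.dec_out = nn.Linear(embed_dim, vocab_size)

    def embed(self, img_feats, captions):
        v = F.normalize(self.img_proj(img_feats), dim=-1)      # (B, D) image embeddings
        _, h = self.txt_enc(self.word_emb(captions))            # h: (1, B, D)
        t = F.normalize(h.squeeze(0), dim=-1)                   # (B, D) sentence embeddings
        return v, t

    def caption_logits(self, img_emb, captions):
        # Teacher-forced decoding from the image embedding.
        words = self.word_emb(captions[:, :-1])
        out, _ = self.dec(words, img_emb.unsqueeze(0).contiguous())
        return self.dec_out(out)                                 # (B, L-1, vocab)

def triplet_ranking_loss(v, t, margin=0.2):
    # Hinge-based ranking loss over all in-batch negatives.
    scores = v @ t.t()                                           # (B, B) similarity matrix
    pos = scores.diag().unsqueeze(1)
    cost_t = (margin + scores - pos).clamp(min=0)                # wrong captions for an image
    cost_v = (margin + scores - pos.t()).clamp(min=0)            # wrong images for a caption
    mask = torch.eye(scores.size(0), dtype=torch.bool)
    return cost_t.masked_fill(mask, 0).sum() + cost_v.masked_fill(mask, 0).sum()

# Usage with random stand-in data.
model = JointEmbeddingWithGeneration()
img_feats = torch.randn(8, 2048)              # e.g. pooled CNN features
captions = torch.randint(1, 10000, (8, 12))   # token ids, fixed length for brevity
v, t = model.embed(img_feats, captions)
rank_loss = triplet_ranking_loss(v, t)
logits = model.caption_logits(v, captions)
gen_loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)), captions[:, 1:].reshape(-1))
loss = rank_loss + 0.1 * gen_loss             # loss weighting is an arbitrary placeholder
loss.backward()
```

The intuition behind coupling the two losses is that an embedding from which the sentence can be regenerated has to retain more than a coarse global summary, which is what the generative branch is meant to encourage; the 0.1 weight above is a placeholder that would be tuned in practice.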
[8] proposed a method to improve cross-modal retrieval using generative modeling, which constructs textual representations as visual features and then ...