Object-less Vision-language Model on Visual Question Classification for Blind People

Authors: Tung Le 1; Khoa Pho 1; Thong Bui 2,3; Huy Tien Nguyen 2,3 and Minh Le Nguyen 1

Affiliations: 1 School of Information Science, Japan Advanced Institute of Science and Technology, Ishikawa, Japan; 2 Faculty of Information Technology, University of Science, Ho Chi Minh City, Vietnam; 3 Vietnam National University, Ho Chi Minh City, Vietnam

Keyword(s): Visual Question Classification, Object-less Image, Vision-language Model, Vision Transformer, VizWiz-VQA.

Abstract: Despite the long-standing presence of question types in Visual Question Answering datasets, Visual Question Classification has not received enough public interest in research. Unlike general text classification, a visual question requires an understanding of visual and textual features simultaneously. Beyond the novelty of Visual Question Classification itself, the most important and practical goal we concentrate on is addressing the weakness of Object Detection on object-less images. We thus propose an Object-less Visual Question Classification model, OL–LXMERT, which generates virtual objects to replace the dependence on Object Detection in previous Vision-Language systems. Our architecture is effective and powerful enough to digest local and global features of images in understanding the relationship between multiple modalities. In experiments on our modified VizWiz-VQC 2020 dataset of images taken by blind people, our Object-less LXMERT achieves promising results in this brand-new multi-modal task. Furthermore, detailed ablation studies show the strength and potential of our model in comparison to competitive approaches.
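The core idea of the abstract — replacing detector-derived region features with "virtual objects" computed directly from a backbone's global feature map — can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the authors' implementation: the function name `virtual_object_tokens`, the 7×7×512 grid size, and the 768-dimensional fusion space are all hypothetical placeholders chosen for the example.

```python
import numpy as np

def virtual_object_tokens(grid_feats, W_proj, b_proj):
    """Turn a backbone's (H, W, D) feature grid into N 'virtual object' tokens.

    Each spatial cell of the grid becomes one token, so no object
    detector is needed; a linear projection maps the tokens into the
    dimension expected by the cross-modal fusion encoder.
    """
    H, W, D = grid_feats.shape
    tokens = grid_feats.reshape(H * W, D)   # flatten grid -> N virtual objects
    return tokens @ W_proj + b_proj         # project to the fusion dimension

# Toy usage with random stand-ins for real backbone outputs and weights.
rng = np.random.default_rng(0)
grid = rng.standard_normal((7, 7, 512))                 # assumed backbone output
W = rng.standard_normal((512, 768)) * 0.02              # assumed projection weights
b = np.zeros(768)
objs = virtual_object_tokens(grid, W, b)
print(objs.shape)  # (49, 768): 49 virtual objects ready for cross-modal fusion
```

In a full model, these virtual-object embeddings would be fed to the vision branch of a cross-modal encoder (such as LXMERT) alongside the question's token embeddings, in place of the region features a detector would normally supply.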

CC BY-NC-ND 4.0

Paper citation in several formats:
Le, T.; Pho, K.; Bui, T.; Nguyen, H. and Nguyen, M. (2022). Object-less Vision-language Model on Visual Question Classification for Blind People. In Proceedings of the 14th International Conference on Agents and Artificial Intelligence - Volume 3: ICAART; ISBN 978-989-758-547-0; ISSN 2184-433X, SciTePress, pages 180-187. DOI: 10.5220/0010797400003116

@conference{icaart22,
author={Le, Tung and Pho, Khoa and Bui, Thong and Nguyen, Huy Tien and Nguyen, Minh Le},
title={Object-less Vision-language Model on Visual Question Classification for Blind People},
booktitle={Proceedings of the 14th International Conference on Agents and Artificial Intelligence - Volume 3: ICAART},
year={2022},
pages={180-187},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0010797400003116},
isbn={978-989-758-547-0},
issn={2184-433X},
}

TY - CONF

JO - Proceedings of the 14th International Conference on Agents and Artificial Intelligence - Volume 3: ICAART
TI - Object-less Vision-language Model on Visual Question Classification for Blind People
SN - 978-989-758-547-0
IS - 2184-433X
AU - Le, T.
AU - Pho, K.
AU - Bui, T.
AU - Nguyen, H.
AU - Nguyen, M.
PY - 2022
SP - 180
EP - 187
DO - 10.5220/0010797400003116
PB - SciTePress
ER -
