Many few-shot image classification methods focus on learning, from abundant samples of seen classes, a fixed feature space that transfers readily to unseen classes. Across different tasks, this feature space is either kept unchanged or adjusted only by generating attention over query samples. However, the channels and spatial parts that are discriminative for comparing query and support images usually differ from task to task. In this paper, we propose a task-sensitive discriminative mutual attention (TDMA) network that produces task- and sample-specific features. For each task, TDMA first generates a discriminative task embedding that encodes the inter-class separability and within-class scatter, and then employs this embedding to enhance the channels that are discriminative with respect to the task. Given a specific query and different support images, TDMA further incorporates the task embedding and long-range dependencies to locate the discriminative parts in the spatial dimension. Experimental results on the miniImageNet, tieredImageNet and FC100 datasets show the effectiveness of the proposed model.
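The two ingredients named in the abstract — a task embedding built from inter-class separability and within-class scatter, and task-conditioned channel enhancement — can be illustrated with a minimal NumPy sketch. This is only a hedged illustration of the general idea, not the paper's actual architecture: the functions `task_embedding` and `channel_attention`, the use of per-channel variances as scatter statistics, and the learned projection `W` are all assumptions for exposition.

```python
import numpy as np


def task_embedding(support_feats, labels):
    """Encode a task as per-channel discriminative statistics (illustrative).

    support_feats: (n_support, channels) pooled support features for one task.
    Returns a (2 * channels,) vector concatenating an inter-class
    separability proxy and a within-class scatter proxy per channel.
    """
    classes = np.unique(labels)
    # Class prototypes: mean support feature per class.
    protos = np.stack([support_feats[labels == c].mean(axis=0) for c in classes])
    # Inter-class separability proxy: variance of prototypes across classes.
    inter = protos.var(axis=0)
    # Within-class scatter proxy: average per-class feature variance.
    within = np.stack(
        [support_feats[labels == c].var(axis=0) for c in classes]
    ).mean(axis=0)
    return np.concatenate([inter, within])


def channel_attention(feats, embed, W):
    """Rescale feature channels by gates derived from the task embedding.

    W is a learned projection (here just a given matrix) mapping the
    (2 * channels,) task embedding to one sigmoid gate per channel.
    """
    gates = 1.0 / (1.0 + np.exp(-(embed @ W)))  # sigmoid, shape (channels,)
    return feats * gates
```

A usage sketch: for a 2-way task with 3 support samples per class and 4-channel features, `task_embedding` yields an 8-dimensional task vector, and `channel_attention` applies the resulting 4 per-channel gates to any query or support feature map. The spatial-attention branch described in the abstract, which additionally uses long-range dependencies, is omitted here.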