Semantic-aware representation blending for multi-label image recognition with partial labels

T Pu, T Chen, H Wu, L Lin - Proceedings of the AAAI Conference on Artificial Intelligence, 2022 - ojs.aaai.org
Abstract
Training multi-label image recognition models with partial labels, in which merely some labels are known while the others are unknown for each image, is a considerably challenging and practical task. To address this task, current algorithms mainly depend on pre-training classification or similarity models to generate pseudo labels for the unknown labels. However, these algorithms require sufficient multi-label annotations to train such models, leading to poor performance especially when the known label proportion is low. In this work, we propose to blend category-specific representations across different images to transfer information of known labels to complement unknown labels, which eliminates the need for pre-trained models and thus does not depend on sufficient annotations. To this end, we design a unified semantic-aware representation blending (SARB) framework that exploits instance-level and prototype-level semantic representations to complement unknown labels with two complementary modules: 1) an instance-level representation blending (ILRB) module blends the representations of known labels in one image into the representations of unknown labels in another image to complement these unknown labels; 2) a prototype-level representation blending (PLRB) module learns more stable representation prototypes for each category and blends the representations of unknown labels with the prototypes of the corresponding categories to complement these labels. Extensive experiments on the MS-COCO, Visual Genome, and Pascal VOC 2007 datasets show that the proposed SARB framework obtains superior performance over current leading competitors on all known label proportion settings, i.e., with mAP improvements of 4.6%, 4.6%, and 2.2% on these three datasets when the known label proportion is 10%. Codes are available at https://github.com/HCPLab-SYSU/HCP-MLR-PL.
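For orientation, below is a minimal PyTorch-style sketch of the two blending operations the abstract describes. The tensor shapes, the label encoding (1 = known positive, -1 = known negative, 0 = unknown), the random image pairing, the mixing coefficients alpha and beta, and the momentum prototype update are illustrative assumptions, not the authors' exact formulation; see the linked repository for the official implementation.

```python
import torch


def instance_level_blend(feats, labels, alpha=0.5):
    """ILRB-style blending: move known-positive information from a partner
    image into the unknown-label slots of the current image.

    feats:  (B, C, D) category-specific features, one D-dim vector per category
    labels: (B, C)    1 = known positive, -1 = known negative, 0 = unknown
    The random pairing and the coefficient alpha are illustrative choices.
    """
    B = feats.size(0)
    perm = torch.randperm(B)                        # pair each image with a random partner
    partner_feats, partner_labels = feats[perm], labels[perm]

    # Blend a slot only if it is unknown here and known-positive in the partner.
    mask = ((labels == 0) & (partner_labels == 1)).unsqueeze(-1).float()  # (B, C, 1)
    blended = alpha * partner_feats + (1.0 - alpha) * feats
    new_feats = mask * blended + (1.0 - mask) * feats

    # Blended slots inherit the partner's positive label as a pseudo target.
    new_labels = torch.where(mask.squeeze(-1) > 0, torch.ones_like(labels), labels)
    return new_feats, new_labels


def prototype_level_blend(feats, labels, prototypes, beta=0.5, momentum=0.9):
    """PLRB-style blending: keep one running prototype per category built from
    known-positive features and mix it into unknown-label representations.

    prototypes: (C, D) running per-category prototypes.
    beta and momentum are illustrative hyper-parameters.
    """
    pos = (labels == 1).unsqueeze(-1).float()                     # (B, C, 1)
    counts = pos.sum(dim=0).clamp(min=1.0)                        # (C, 1)
    batch_proto = (feats * pos).sum(dim=0) / counts               # (C, D)

    # Momentum update only for categories with a known positive in this batch.
    has_pos = pos.sum(dim=0) > 0                                  # (C, 1) bool
    updated = momentum * prototypes + (1.0 - momentum) * batch_proto
    prototypes = torch.where(has_pos, updated, prototypes)

    # Blend prototypes into unknown-label slots and mark them as pseudo positives.
    unk = (labels == 0).unsqueeze(-1).float()                     # (B, C, 1)
    new_feats = unk * (beta * prototypes.unsqueeze(0) + (1.0 - beta) * feats) + (1.0 - unk) * feats
    new_labels = torch.where(labels == 0, torch.ones_like(labels), labels)
    return new_feats, new_labels, prototypes
```

In this reading, both modules operate on category-specific feature vectors rather than raw images, so the blended slots can be supervised as positives for the corresponding categories without requiring a separately pre-trained pseudo-labeling model.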