Boosting Few-Shot Learning via Attentive Feature Regularization

Authors

  • Xingyu Zhu, University of Science and Technology of China
  • Shuo Wang, University of Science and Technology of China
  • Jinda Lu, University of Science and Technology of China
  • Yanbin Hao, University of Science and Technology of China
  • Haifeng Liu, Brain-Inspired Technology Co., Ltd.
  • Xiangnan He, University of Science and Technology of China

DOI:

https://doi.org/10.1609/aaai.v38i7.28614

Keywords:

CV: Representation Learning for Vision, CV: Multi-modal Vision

Abstract

Few-shot learning (FSL) based on manifold regularization aims to improve the recognition of novel objects from limited training samples by mixing two samples from different categories with a blending factor. However, this mixing operation weakens the feature representation because of its linear interpolation and its neglect of the importance of specific feature channels. To address these issues, this paper proposes attentive feature regularization (AFR), which improves feature representativeness and discriminability. In our approach, we first calculate the relations between the semantic labels of different categories to select the related features used for regularization. We then design two attention-based calculations at the instance and channel levels. These calculations enable the regularization procedure to focus on two crucial aspects: feature complementarity through adaptive interpolation among related categories, and the emphasis on specific feature channels. Finally, we combine these regularization strategies to significantly improve classifier performance. Empirical studies on several popular FSL benchmarks demonstrate the effectiveness of AFR, which improves the recognition accuracy of novel categories without retraining any feature extractor, especially in the 1-shot setting. Furthermore, the proposed AFR can be seamlessly integrated into other FSL methods to improve classification performance.
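The pipeline summarized above (select semantically related categories, then apply instance-level and channel-level attention before training the classifier) can be illustrated with a short sketch. The PyTorch code below is a hypothetical, simplified illustration, not the paper's implementation: all function and variable names (`instance_level_regularization`, `channel_level_regularization`, `support_feats`, `base_protos`, `sem_sim`) are assumptions, and the attention forms stand in for the learned calculations described in the abstract.

```python
import torch
import torch.nn.functional as F

def instance_level_regularization(support_feats, base_protos, sem_sim, top_k=5):
    """Sketch of instance-level attention: adaptively interpolate support
    features with prototypes of the most semantically related base categories."""
    # keep only the top-k base categories according to semantic similarity
    vals, idx = sem_sim.topk(top_k)
    related = base_protos[idx]                                              # (k, D)
    # attention over related prototypes from feature similarity, scaled by semantics
    attn = F.normalize(support_feats, dim=-1) @ F.normalize(related, dim=-1).T  # (N, k)
    attn = torch.softmax(attn * vals, dim=-1)
    complement = attn @ related                                             # (N, D)
    return support_feats + complement        # support features enriched by related categories

def channel_level_regularization(feats):
    """Sketch of channel-level attention: re-weight feature channels to
    emphasize the more informative ones (the paper learns this emphasis)."""
    weights = torch.sigmoid(feats - feats.mean(dim=-1, keepdim=True))       # (N, D)
    return feats * weights

if __name__ == "__main__":
    torch.manual_seed(0)
    support = torch.randn(5, 640)   # hypothetical 5-shot support set, 640-d frozen features
    protos = torch.randn(64, 640)   # hypothetical prototypes of 64 base categories
    sem = torch.rand(64)            # semantic relatedness of each base label to the novel label
    regularized = channel_level_regularization(
        instance_level_regularization(support, protos, sem))
    print(regularized.shape)        # torch.Size([5, 640])
```

In this sketch the feature extractor stays frozen, matching the abstract's claim that AFR improves novel-category accuracy without retraining any backbone; only the regularized support features would be used to fit the classifier.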

Published

2024-03-24

How to Cite

Zhu, X., Wang, S., Lu, J., Hao, Y., Liu, H., & He, X. (2024). Boosting Few-Shot Learning via Attentive Feature Regularization. Proceedings of the AAAI Conference on Artificial Intelligence, 38(7), 7793-7801. https://doi.org/10.1609/aaai.v38i7.28614

Issue

Vol. 38 No. 7 (2024)
Section

AAAI Technical Track on Computer Vision VI