On the Episodic Difficulty of Few-shot Learning
Proceedings of The 14th Asian Conference on Machine
Learning, PMLR 189:48-63, 2023.
Abstract
Dog vs. hot dog or dog vs. wolf: which tends to be the harder comparison task? While simple, this question is meaningful for few-shot classification. Few-shot learning enables trained models to recognize unseen classes from just a few labelled samples. Trained few-shot models must therefore be able to assess the degree of similarity between unlabelled and labelled samples. In each few-shot learning episode, a labelled support set and an unlabelled query set are sampled from the training dataset for model training. In this episodic setting, most algorithms draw the data samples uniformly at random, which disregards the difficulty of each training episode, and that difficulty may make a difference: after all, it is usually easier to differentiate between a dog and a hot dog than between a dog and a wolf. In this paper, we therefore delve into episodic difficulty, the difficulty of each training episode, discovering several insights and proposing strategies to exploit it. Firstly, defining episodic difficulty as a training loss, we find and study a correlation between episodic difficulty and the visual similarity among data samples in each episode. Secondly, we assess the respective usefulness of easy and difficult episodes for the training process. Lastly, based on this assessment, we design a curriculum for few-shot learning that trains with incrementally increasing difficulty. We observe that such an approach achieves faster convergence for few-shot algorithms, reducing the average training time by around 50%, and also improves the final testing accuracy of meta-learning algorithms.
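
For concreteness, below is a minimal sketch of how episodic difficulty and an easy-to-hard curriculum could be realised, assuming a metric-based learner in the style of a prototypical network. The function names (score_episode, build_curriculum) and the prototype-distance loss used for scoring are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch: episodic difficulty measured as the loss the current model
# incurs on a sampled episode, plus a simple easy-to-hard ordering of a pool
# of pre-sampled episodes. Illustrative only; not the authors' code.
import torch
import torch.nn.functional as F


def score_episode(encoder, support_x, support_y, query_x, query_y):
    """Episodic difficulty = loss of the current model on this episode."""
    with torch.no_grad():
        z_s = encoder(support_x)   # embed labelled support samples
        z_q = encoder(query_x)     # embed unlabelled query samples
        # Class prototypes: mean support embedding per class (ProtoNet-style).
        classes = support_y.unique()
        protos = torch.stack([z_s[support_y == c].mean(0) for c in classes])
        # Negative squared distance to each prototype serves as the logit.
        logits = -torch.cdist(z_q, protos) ** 2
        targets = torch.stack([(classes == y).nonzero().squeeze()
                               for y in query_y])
        return F.cross_entropy(logits, targets).item()


def build_curriculum(encoder, episodes):
    """Order a pool of (support_x, support_y, query_x, query_y) episodes
    from easy to hard, so training can proceed with incremental difficulty."""
    scored = [(score_episode(encoder, *ep), ep) for ep in episodes]
    scored.sort(key=lambda pair: pair[0])   # ascending difficulty
    return [ep for _, ep in scored]
```

In use, one would periodically re-score the episode pool with the current encoder (difficulty changes as the model improves) and feed episodes to the training loop in the returned order.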