Long Tail Image Generation Through Feature Space Augmentation and Iterated Learning

R Elberg, D Parra, M Petrache - arXiv preprint arXiv:2405.01705, 2024 - arxiv.org
Image and multimodal machine learning tasks are challenging to solve when data is poorly distributed. In particular, data availability and privacy restrictions exacerbate these hurdles in the medical domain. The state of the art in image generation quality is held by Latent Diffusion models, making them prime candidates for tackling this problem. However, a few key issues still need to be solved, such as the difficulty of generating data from under-represented classes and a slow inference process. To mitigate these issues, we propose a new method for image augmentation in long-tailed data that leverages the rich latent space of pre-trained Stable Diffusion models. We create a modified separable latent space in which to mix head- and tail-class examples. We build this space via Iterated Learning of underlying sparsified embeddings, which we apply to task-specific saliency maps via a K-NN approach. Code is available at https://github.com/SugarFreeManatee/Feature-Space-Augmentation-and-Iterated-Learning
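As a rough illustration of the core idea only (not the authors' exact pipeline), the sketch below blends a tail-class latent with the mean of its k nearest head-class latents in a shared feature space, such as that produced by a pre-trained Stable Diffusion VAE encoder. The function name, tensor shapes, and mixing coefficient are hypothetical stand-ins.

```python
import torch

def knn_head_tail_mix(tail_latents, head_latents, k=5, alpha=0.5):
    """Hypothetical sketch: blend each tail-class latent with the mean of
    its k nearest head-class latents (feature-space augmentation).

    tail_latents: (Nt, D) flattened latents of tail-class images
    head_latents: (Nh, D) flattened latents of head-class images
    """
    # Pairwise Euclidean distances between tail and head latents
    dists = torch.cdist(tail_latents, head_latents)      # (Nt, Nh)
    knn_idx = dists.topk(k, largest=False).indices       # (Nt, k)
    neighbours = head_latents[knn_idx]                    # (Nt, k, D)
    head_mean = neighbours.mean(dim=1)                    # (Nt, D)
    # Convex combination: keep tail identity, borrow head-class variation
    return (1 - alpha) * tail_latents + alpha * head_mean


# Toy usage with random tensors standing in for SD-VAE encodings
torch.manual_seed(0)
tail = torch.randn(8, 16)    # latents of 8 tail-class images
head = torch.randn(64, 16)   # latents of 64 head-class images
augmented = knn_head_tail_mix(tail, head, k=5, alpha=0.4)
print(augmented.shape)       # torch.Size([8, 16])
```

In the paper, the mixing space is additionally shaped by Iterated Learning over sparsified embeddings and restricted via task-specific saliency maps; the convex blend above is only the simplest version of the K-NN mixing step.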