Feature Unlearning for Pre-trained GANs and VAEs

Authors

  • Saemi Moon (POSTECH, CSE)
  • Seunghyuk Cho (POSTECH, GSAI)
  • Dongwoo Kim (POSTECH, CSE; POSTECH, GSAI)

DOI:

https://doi.org/10.1609/aaai.v38i19.30138

Keywords:

General

Abstract

We tackle the problem of feature unlearning from pre-trained image generative models: GANs and VAEs. Unlike a common unlearning task, where the unlearning target is a subset of the training set, we aim to unlearn a specific feature, such as a hairstyle from facial images, from pre-trained generative models. Since the target feature is present only in a local region of an image, unlearning the entire image from the pre-trained model may destroy details in the remaining regions. To specify which feature to unlearn, we collect randomly generated images that contain the target feature. We then identify a latent representation corresponding to the target feature and use that representation to fine-tune the pre-trained model. Through experiments on the MNIST, CelebA, and FFHQ datasets, we show that target features are successfully removed while the fidelity of the original models is preserved. Further experiments with an adversarial attack show that the unlearned model is more robust in the presence of malicious parties.
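The abstract outlines a three-step recipe: collect generated samples containing the target feature, identify a latent representation of that feature, and fine-tune the model using it. Below is a minimal PyTorch sketch of that recipe. The placeholder `generator` and `feature_classifier`, the mean-difference direction heuristic, and the MSE fine-tuning objective are illustrative assumptions, not the paper's exact procedure.

```python
import copy

import torch
import torch.nn.functional as F

latent_dim = 64

# Hypothetical stand-ins: in practice these are the pre-trained generator
# and a classifier that detects the target feature in generated images.
generator = torch.nn.Sequential(
    torch.nn.Linear(latent_dim, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 3 * 32 * 32),
)
feature_classifier = torch.nn.Linear(3 * 32 * 32, 1)

# 1) Collect randomly generated images and split them by the target feature.
with torch.no_grad():
    z = torch.randn(512, latent_dim)
    scores = torch.sigmoid(feature_classifier(generator(z))).squeeze(1)
    has_feature = scores > 0.5

# 2) Identify a latent representation of the feature. Here: the difference
#    of mean latent codes between the two groups, a common editing heuristic.
direction = z[has_feature].mean(0) - z[~has_feature].mean(0)
direction = direction / direction.norm()

def remove_feature(z):
    """Project latents onto the subspace orthogonal to the feature direction."""
    return z - (z @ direction).unsqueeze(1) * direction

# 3) Fine-tune the generator so each latent produces the image its
#    feature-free projection produces under a frozen copy of the model.
reference = copy.deepcopy(generator).eval()
for p in reference.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.Adam(generator.parameters(), lr=1e-4)
for step in range(100):
    z = torch.randn(64, latent_dim)
    with torch.no_grad():
        target = reference(remove_feature(z))  # feature-free targets
    loss = F.mse_loss(generator(z), target)    # sketch unlearning objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The frozen reference copy anchors feature-free outputs to the original model, which is one simple way to remove the target feature while keeping the rest of the generator's behavior intact.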

Published

2024-03-24

How to Cite

Moon, S., Cho, S., & Kim, D. (2024). Feature Unlearning for Pre-trained GANs and VAEs. Proceedings of the AAAI Conference on Artificial Intelligence, 38(19), 21420-21428. https://doi.org/10.1609/aaai.v38i19.30138

Issue

Vol. 38 No. 19 (2024)

Section

AAAI Technical Track on Safe, Robust and Responsible AI Track