Keywords: generative modeling, domain generalization
Abstract: We consider the problem of domain generalization: how to learn, given data from a set of domains, representations that generalize to data from a previously unseen domain. We propose the Domain Invariant VAE (DIVA), a generative model that tackles this problem by learning three independent latent subspaces: one for the class, one for the domain, and one for the object itself. In addition, we highlight that, due to the generative nature of our model, we can also incorporate unlabeled data from known or previously unseen domains. This property is highly desirable in fields like medical imaging, where labeled data is scarce. We experimentally evaluate our model on the rotated MNIST benchmark and show that (i) the learned subspaces are indeed complementary to each other, (ii) we improve upon recent works on this task, and (iii) incorporating unlabeled data can boost performance even further.
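To make the three-subspace idea concrete, here is a minimal numpy sketch of the encoding step: three independent Gaussian encoder heads produce the domain, object, and class latents, which a decoder would consume jointly. All layer sizes, weight shapes, and names (`z_d`, `z_x`, `z_y`) are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W, b):
    """Toy linear encoder head producing a Gaussian mean and log-variance."""
    h = x @ W + b
    d = h.shape[-1] // 2
    return h[..., :d], h[..., d:]

def reparameterize(mu, logvar, rng):
    """Standard VAE reparameterization trick: z = mu + sigma * eps."""
    return mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)

# DIVA-style factorization: three independent latent subspaces, one each for
# the domain (z_d), the object/residual variation (z_x), and the class (z_y).
# Dimensions here are arbitrary toy values.
x_dim, z_dim = 16, 4
x = rng.standard_normal((2, x_dim))  # a batch of 2 inputs

heads = {name: (0.1 * rng.standard_normal((x_dim, 2 * z_dim)), np.zeros(2 * z_dim))
         for name in ("z_d", "z_x", "z_y")}

latents = {}
for name, (W, b) in heads.items():
    mu, logvar = encode(x, W, b)
    latents[name] = reparameterize(mu, logvar, rng)

# A decoder would reconstruct x from the concatenation of all three subspaces,
# while auxiliary classifiers (omitted here) predict the domain from z_d and
# the class label from z_y, encouraging the subspaces to stay complementary.
z = np.concatenate([latents["z_d"], latents["z_x"], latents["z_y"]], axis=-1)
print(z.shape)  # (2, 12)
```

The decoder, the three priors, and the semi-supervised objective that lets the model use unlabeled data are omitted; this only shows how the latent space is split into independent parts.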