Topological autoencoders
International Conference on Machine Learning, 2020 · proceedings.mlr.press
Abstract
We propose a novel approach for preserving topological structures of the input space in latent representations of autoencoders. Using persistent homology, a technique from topological data analysis, we calculate topological signatures of both the input and latent space to derive a topological loss term. Under weak theoretical assumptions, we construct this loss in a differentiable manner, such that the encoding learns to retain multi-scale connectivity information. We show that our approach is theoretically well-founded and that it exhibits favourable latent representations on a synthetic manifold as well as on real-world image data sets, while preserving low reconstruction errors.
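To make the abstract's core idea concrete, the following is a minimal, non-authoritative sketch in NumPy/SciPy of one way such a topological loss can be structured. It exploits the fact that the 0-dimensional persistence pairing of a Vietoris–Rips filtration corresponds to the edges of a minimum spanning tree of the pairwise distance matrix, and compares the distances selected by each space's pairing against the same entries in the other space. The function names are hypothetical, and this plain-NumPy version omits the differentiability machinery the paper requires for training:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def persistence_pairs_0d(dist):
    """Edge indices of the 0-dim persistence pairing (hypothetical helper).

    For 0-dimensional homology, the persistence pairs of a Vietoris-Rips
    filtration are exactly the edges of a minimum spanning tree of the
    distance matrix.
    """
    mst = minimum_spanning_tree(dist).toarray()
    return np.array(np.nonzero(mst)).T  # shape (n - 1, 2)

def topological_loss(x, z):
    """Simplified sketch of a topological loss between input x and latent z.

    Each space's persistence pairing selects a set of distances; the loss
    penalizes disagreement on those selected distances across the spaces.
    """
    dx = squareform(pdist(x))  # pairwise distances in input space
    dz = squareform(pdist(z))  # pairwise distances in latent space
    px = persistence_pairs_0d(dx)
    pz = persistence_pairs_0d(dz)
    lx = 0.5 * np.sum((dx[px[:, 0], px[:, 1]] - dz[px[:, 0], px[:, 1]]) ** 2)
    lz = 0.5 * np.sum((dz[pz[:, 0], pz[:, 1]] - dx[pz[:, 0], pz[:, 1]]) ** 2)
    return lx + lz
```

In an autoencoder, a term like this would be added to the reconstruction loss and computed per mini-batch, which is what lets the encoding retain multi-scale connectivity information while keeping reconstruction error low.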