Deep Variational Sufficient Dimensionality Reduction
Abstract
We consider the problem of sufficient dimensionality reduction (SDR), where a high-dimensional observation is transformed into a low-dimensional subspace that preserves the information the observation carries about the label variable. We propose DVSDR, a deep variational approach to sufficient dimensionality reduction. The deep structure in our model has a bottleneck that represents the low-dimensional embedding of the data. We describe the SDR problem using graphical models and use the variational autoencoder framework to maximize a lower bound on the log-likelihood of the joint distribution of the observation and the label. We show that this maximization can be interpreted as solving the SDR problem. DVSDR can be easily adapted to the semi-supervised learning setting. In our experiments we show that DVSDR performs competitively on classification tasks while being able to generate novel data samples.
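The abstract describes a variational autoencoder with a low-dimensional bottleneck trained by maximizing a lower bound on the joint log-likelihood of the observation and the label. Below is a minimal, hypothetical sketch of such an objective; the paper's exact architecture, loss weighting, and layer sizes are not given in the abstract, so all names and choices here are assumptions for illustration only.

```python
# Hypothetical DVSDR-style sketch: encoder q(z|x), decoder p(x|z), and a label
# predictor p(y|z) so that y depends on x only through the bottleneck z.
# Architecture, dimensions, and equal weighting of the three terms are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DVSDRSketch(nn.Module):
    def __init__(self, x_dim, z_dim, n_classes, h_dim=256):
        super().__init__()
        # Encoder q(z|x): maps the observation to the low-dimensional bottleneck.
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.enc_mu = nn.Linear(h_dim, z_dim)
        self.enc_logvar = nn.Linear(h_dim, z_dim)
        # Decoder p(x|z): reconstructs the observation from the embedding.
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))
        # Predictor p(y|z): label information must pass through z (sufficiency).
        self.clf = nn.Linear(z_dim, n_classes)

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.enc_mu(h), self.enc_logvar(h)
        # Reparameterization trick for a Gaussian q(z|x).
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), self.clf(z), mu, logvar

def neg_joint_elbo(x, y, x_rec, y_logits, mu, logvar):
    # Negative lower bound on log p(x, y):
    # reconstruction term + classification term + KL to the prior p(z).
    rec = F.mse_loss(x_rec, x, reduction='sum')
    cls = F.cross_entropy(y_logits, y, reduction='sum')
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + cls + kl
```

In a semi-supervised setting, one plausible reading of the abstract is that the classification term is simply omitted for unlabeled examples while the reconstruction and KL terms are kept; this is an assumption, not a detail stated in the abstract.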
- Publication: arXiv e-prints
- Pub Date: December 2018
- arXiv: arXiv:1812.07641
- Bibcode: 2018arXiv181207641B
- Keywords: Computer Science - Machine Learning; Statistics - Machine Learning