Enhancing Supervised Visualization through Autoencoder and Random Forest Proximities for Out-of-Sample Extension
The value of supervised dimensionality reduction lies in its ability to uncover meaningful connections between data features and labels. Common dimensionality reduction methods embed a set of fixed, latent points, but are not capable of generalizing to an unseen test set. In this paper, we provide an out-of-sample extension method for the random forest-based supervised dimensionality reduction method, RF-PHATE, combining information learned from the random forest model with the function-learning capabilities of autoencoders. Through quantitative assessment of various autoencoder architectures, we identify that networks that reconstruct random forest proximities are more robust for the embedding extension problem. Furthermore, by leveraging proximity-based prototypes, we achieve a 40% reduction in training time without compromising extension quality. Our method does not require label information for out-of-sample points, thus serving as a semi-supervised method, and can achieve consistent quality using only 10% of the training data.
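To make the described pipeline concrete, below is a minimal sketch (not the authors' implementation) of the core idea: feed an autoencoder rows of the random forest proximity matrix, regress its bottleneck onto a precomputed RF-PHATE embedding, and have the decoder reconstruct the proximities, which the abstract identifies as the more robust reconstruction target. The helper `proximity_matrix`, the `ProximityAE` architecture, the equal loss weighting, and the PCA stand-in for the true RF-PHATE coordinates are all illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

def proximity_matrix(forest, X, X_ref):
    """Fraction of trees in which each pair of samples shares a leaf."""
    leaves = forest.apply(X)          # (n, n_trees) leaf indices
    leaves_ref = forest.apply(X_ref)  # (m, n_trees)
    prox = (leaves[:, None, :] == leaves_ref[None, :, :]).mean(axis=2)
    return prox.astype(np.float32)

class ProximityAE(nn.Module):
    """Encoder maps a proximity row to the embedding; decoder reconstructs it."""
    def __init__(self, n_train, embed_dim=2, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_train, hidden), nn.ReLU(),
            nn.Linear(hidden, embed_dim))
        self.decoder = nn.Sequential(
            nn.Linear(embed_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_train))

    def forward(self, p):
        z = self.encoder(p)
        return z, self.decoder(z)

# Synthetic data standing in for a labeled training set and unlabeled test set.
X_train, y_train = make_classification(n_samples=200, n_features=10, random_state=0)
X_test, _ = make_classification(n_samples=50, n_features=10, random_state=1)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
P = proximity_matrix(rf, X_train, X_train)

# Stand-in target: in practice this would be the precomputed RF-PHATE embedding.
E = PCA(n_components=2).fit_transform(P).astype(np.float32)

P_t, E_t = torch.from_numpy(P), torch.from_numpy(E)
model = ProximityAE(n_train=P.shape[1])
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(200):
    z, p_hat = model(P_t)
    # Bottleneck matches the embedding; decoder reconstructs proximities.
    loss = nn.functional.mse_loss(z, E_t) + nn.functional.mse_loss(p_hat, P_t)
    opt.zero_grad(); loss.backward(); opt.step()

# Out-of-sample extension: only proximities to training points are needed,
# so no labels are required for the test set (the semi-supervised property).
P_test = torch.from_numpy(proximity_matrix(rf, X_test, X_train))
with torch.no_grad():
    z_test = model.encoder(P_test)
```

Note the design choice the sketch reflects: because an out-of-sample point's input is its proximity row against the training set, new points pass through the trained random forest and encoder without label information, and sub-selecting the reference columns to proximity-based prototypes would shrink the input dimension and, with it, training cost.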