
Shared Deep Kernel Learning for Dimensionality Reduction

  • Conference paper

Advances in Knowledge Discovery and Data Mining (PAKDD 2018)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 10939)

Abstract

Deep Kernel Learning (DKL) has proven to be an effective method for learning complex feature representations by combining the structural properties of deep learning with the nonparametric flexibility of kernel methods, and it can naturally be used for supervised dimensionality reduction. However, when only limited training data are available, its performance can be compromised, because the deep structure embedded in the model has many parameters that are difficult to optimize efficiently. To address this issue, we propose the Shared Deep Kernel Learning model, which combines DKL with the shared Gaussian Process Latent Variable Model. The proposed method not only improves performance without increasing model complexity but also learns hierarchical features by sharing the deep kernel. Comparisons with several supervised dimensionality reduction methods and a deep learning approach verify the advantages of the proposed model.
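To make the central idea concrete: a deep kernel in the DKL sense composes a base kernel with a neural-network feature extractor, so that k(x, x') = k_rbf(g(x), g(x')). The sketch below is illustrative only and is not the authors' implementation; the network sizes, random weights, and function names are assumptions standing in for parameters that DKL would learn jointly with the kernel hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer feature extractor g: R^3 -> R^2 (random weights stand in
# for the deep-structure parameters that DKL optimizes jointly).
W1, b1 = rng.normal(size=(5, 3)), rng.normal(size=5)
W2, b2 = rng.normal(size=(2, 5)), rng.normal(size=2)

def g(X):
    """Map inputs through the deep structure embedded in the kernel."""
    H = np.tanh(X @ W1.T + b1)
    return H @ W2.T + b2

def rbf(A, B, lengthscale=1.0):
    """Base RBF kernel evaluated on extracted features."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def deep_kernel(X, Y):
    """Deep kernel: base kernel composed with the feature extractor."""
    return rbf(g(X), g(Y))

X = rng.normal(size=(4, 3))
K = deep_kernel(X, X)
print(K.shape)              # (4, 4) Gram matrix
print(np.allclose(K, K.T))  # True: symmetric, as a kernel matrix must be
```

Because the composition of a positive-definite base kernel with any deterministic map g is still a valid kernel, such a Gram matrix can be dropped into a standard Gaussian process model, which is what lets DKL pair deep feature learning with GP inference.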

Notes

  1. http://people.cs.uchicago.edu/~dinoj/manifold/swissroll.html

Acknowledgments

This work is supported by the National Natural Science Foundation of China under Grants 61402424, 61603355, 61773355, the National Science and Technology Major Project under Grant 2016ZX05014003-003, and the Fundamental Research Funds for the Central Universities, China University of Geosciences (Wuhan).

Author information

Corresponding author

Correspondence to Xinwei Jiang.

Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this paper

Cite this paper

Jiang, X., Gao, J., Liu, X., Cai, Z., Zhang, D., Liu, Y. (2018). Shared Deep Kernel Learning for Dimensionality Reduction. In: Phung, D., Tseng, V., Webb, G., Ho, B., Ganji, M., Rashidi, L. (eds) Advances in Knowledge Discovery and Data Mining. PAKDD 2018. Lecture Notes in Computer Science (LNAI), vol. 10939. Springer, Cham. https://doi.org/10.1007/978-3-319-93040-4_24

Download citation

  • DOI: https://doi.org/10.1007/978-3-319-93040-4_24

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-93039-8

  • Online ISBN: 978-3-319-93040-4

  • eBook Packages: Computer Science (R0)
