
Predicting time series with support vector machines

  • Part VII: Prediction, Forecasting and Monitoring
  • Conference paper
Artificial Neural Networks — ICANN'97 (ICANN 1997)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1327)

Abstract

Support Vector Machines are used for time series prediction and compared to radial basis function networks. We make use of two different cost functions for Support Vectors: training with (i) an ε-insensitive loss and (ii) Huber's robust loss function, and we discuss how to choose the regularization parameters in these models. Two applications are considered: data from (a) a noisy (normal and uniform noise) Mackey-Glass equation and (b) the Santa Fe competition (set D). In both cases Support Vector Machines show excellent performance. In case (b) the Support Vector approach improves the best known result on the benchmark by 29%.
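
To make the setup in the abstract concrete, the sketch below is a minimal, hedged illustration and not the authors' implementation. The loss definitions are the standard SV regression cost functions as given in Vapnik [14], and the code uses scikit-learn's SVR (which exposes only the ε-insensitive loss, not the Huber variant) on a crude Euler discretization of the Mackey-Glass delay equation [7]. The embedding dimension, noise level, and all hyperparameter values are illustrative placeholders, not the settings used in the paper.

```latex
% Standard SV regression cost functions (following Vapnik, 1995),
% written for the residual \xi = y - f(x):
% (i)  \varepsilon-insensitive loss
|\xi|_{\varepsilon} = \max\bigl(0,\ |\xi| - \varepsilon\bigr)
% (ii) Huber's robust loss: quadratic near zero, linear in the tails
L_{\mu}(\xi) =
\begin{cases}
  \tfrac{1}{2}\,\xi^{2},              & |\xi| \le \mu, \\
  \mu\,|\xi| - \tfrac{1}{2}\,\mu^{2}, & |\xi| > \mu.
\end{cases}
```

```python
# Minimal sketch (not the authors' code): one-step-ahead prediction of a
# noisy Mackey-Glass series with an epsilon-insensitive SVM regressor.
# Assumes numpy and scikit-learn; every hyperparameter here is a guess.
import numpy as np
from sklearn.svm import SVR


def mackey_glass(n, tau=17, beta=0.2, gamma=0.1, power=10, x0=1.2):
    """Crude Euler discretization (unit step) of the Mackey-Glass delay equation
    dx/dt = beta * x(t - tau) / (1 + x(t - tau)**power) - gamma * x(t)."""
    x = np.full(n + tau, x0)           # constant history as initial condition
    for t in range(tau, n + tau - 1):
        delayed = x[t - tau]
        x[t + 1] = x[t] + beta * delayed / (1.0 + delayed ** power) - gamma * x[t]
    return x[tau:]


series = mackey_glass(1200)
series = series + np.random.normal(scale=0.01, size=series.shape)  # normal noise

# Time-delay embedding: predict x(t) from the d preceding samples.
d = 6
X = np.array([series[i:i + d] for i in range(len(series) - d)])
y = series[d:]

n_train = 800
X_train, y_train = X[:n_train], y[:n_train]
X_test, y_test = X[n_train:], y[n_train:]

# RBF-kernel SVR with epsilon-insensitive loss; C, epsilon and the kernel
# width gamma would have to be chosen as the paper discusses, so the values
# below are placeholders.
svr = SVR(kernel="rbf", C=10.0, epsilon=0.01, gamma=1.0)
svr.fit(X_train, y_train)

pred = svr.predict(X_test)
rmse = np.sqrt(np.mean((pred - y_test) ** 2))
print(f"one-step-ahead test RMSE: {rmse:.4f}")
print(f"number of support vectors: {len(svr.support_)}")
```

The same embedding and train/test split could be reused to fit the radial basis function network baseline mentioned in the abstract; that comparison, and the Huber-loss variant, are left out here because scikit-learn does not provide them directly.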

References

  1. M. Aizerman, E. Braverman, L. Rozonoér (1964), Theoretical foundations of the potential function method in pattern recognition learning. Automation and Remote Control, 25:821–837.

  2. C.M. Bishop (1995), Neural networks for pattern recognition, Oxford U. Press.

  3. B. E. Boser, I. M. Guyon, and V. N. Vapnik (1992), A training algorithm for optimal margin classifiers. In D. Haussler, editor, Proc. of COLT'92, 144.

  4. C. Burges, B. Schölkopf (1997), Improving speed and accuracy of Support Vector Machines, NIPS'96.

  5. H. Drucker, C. Burges, L. Kaufman, A. Smola, V. Vapnik (1997), Linear support vector regression machines, NIPS'96.

  6. P. J. Huber (1972), Robust statistics: a review. Ann. Statist., 43:1041.

  7. M. C. Mackey and L. Glass (1977), Science, 197:287–289.

  8. J. Moody and C. Darken (1989), Neural Computation, 1(2):281–294.

  9. K. Pawelzik, J. Kohlmorgen, K.-R. Müller (1996), Neural Comp., 8(2):342–358.

  10. K. Pawelzik, K.-R. Müller, J. Kohlmorgen (1996), Prediction of Mixtures, in ICANN '96, LNCS 1112, Springer Berlin, 127–132 and GMD Tech.Rep. 1069.

  11. B. Schölkopf, C. Burges, V. Vapnik (1995), Extracting support data for a given task. KDD'95 (eds. U. Fayyad, R. Uthurusamy), AAAI Press, Menlo Park, CA.

  12. A. J. Smola, B. Schölkopf (1997), On a kernel-based method for pattern recognition, regression, approximation and operator inversion. Algorithmica, to appear.

  13. A.S. Weigend, N.A. Gershenfeld (Eds.) (1994), Time Series Prediction: Forecasting the Future and Understanding the Past, Addison-Wesley.

  14. V. Vapnik (1995), The Nature of Statistical Learning Theory. Springer NY.

  15. V. Vapnik, S. Golowich, A. Smola (1997), Support vector method for function approximation, regression estimation, and signal processing, NIPS'96.

  16. X. Zhang, J. Hutchinson (1994), Simple architectures on fast machines: practical issues in nonlinear time series prediction. In [13].

Editor information

Wulfram Gerstner, Alain Germond, Martin Hasler, Jean-Daniel Nicoud

Copyright information

© 1997 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Müller, K.R., Smola, A.J., Rätsch, G., Schölkopf, B., Kohlmorgen, J., Vapnik, V. (1997). Predicting time series with support vector machines. In: Gerstner, W., Germond, A., Hasler, M., Nicoud, J.-D. (eds) Artificial Neural Networks — ICANN'97. ICANN 1997. Lecture Notes in Computer Science, vol 1327. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0020283

  • DOI: https://doi.org/10.1007/BFb0020283

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-63631-1

  • Online ISBN: 978-3-540-69620-9

  • eBook Packages: Springer Book Archive
