Position-Based Content Attention for Time Series Forecasting with Sequence-to-Sequence RNNs

  • Conference paper

Neural Information Processing (ICONIP 2017)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10638)

Abstract

We propose here an extended attention model for sequence-to-sequence recurrent neural networks (RNNs) designed to capture (pseudo-)periods in time series. This extended attention model can be deployed on top of any RNN and is shown to yield state-of-the-art performance for time series forecasting on several univariate and multivariate time series.
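The paper's exact formulation is not reproduced on this page, but the idea of an attention model extended with position information can be illustrated with a minimal sketch. Assumptions: the function and variable names (`position_content_attention`, `period_bias`, the toy dimensions) are hypothetical, and the position information is modeled here as a simple additive bias on the attention logits that emphasizes encoder steps lying one pseudo-period behind the prediction; this is a simplification, not the authors' method.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def position_content_attention(enc_states, dec_state, period_bias):
    """Combine a content score (dot product between decoder state and
    each encoder state) with a per-position bias, then build the
    context vector as the weighted sum of encoder states."""
    content = enc_states @ dec_state           # content scores, shape (T,)
    weights = softmax(content + period_bias)   # attention weights, shape (T,)
    context = weights @ enc_states             # context vector, shape (d,)
    return context, weights

# Toy example: a history of T=6 encoder hidden states of dimension d=4.
rng = np.random.default_rng(0)
enc = rng.normal(size=(6, 4))
dec = rng.normal(size=4)

# Hypothetical pseudo-period of 3 steps: bias the step 3 positions back.
bias = np.zeros(6)
bias[-3] = 2.0

ctx, w = position_content_attention(enc, dec, bias)
```

The weights still sum to one, so the bias only redistributes attention mass toward the pseudo-periodic position rather than replacing the content score.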


Notes

  1.

    This assumption is easy to satisfy by increasing the size of the history if the pseudo-periods are known or by resorting to a validation set to tune T.

  2.

    We compared several methods for handling missing values in the RNN-based models, namely linear interpolation, non-linear spline interpolation, kernel-based Fourier transform interpolation, and padding. The best reconstruction was obtained with linear interpolation, hence its choice here.

  3.

    http://deeplearning.net/software/theano/.

  4.

    https://lasagne.readthedocs.io.
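Footnote 2 above retains linear interpolation for missing values. As a sketch of what that preprocessing step looks like (toy series and values are illustrative, not from the paper's datasets), using numpy's `interp`:

```python
import numpy as np

# Toy time series with gaps: NaN marks a missing observation.
t = np.arange(8, dtype=float)
y = np.array([1.0, np.nan, 3.0, 4.0, np.nan, np.nan, 7.0, 8.0])

# Linearly interpolate each missing point from the observed neighbors.
mask = np.isnan(y)
y_filled = y.copy()
y_filled[mask] = np.interp(t[mask], t[~mask], y[~mask])

print(y_filled)  # gaps filled along the straight line between neighbors
```

Each reconstructed value lies on the line segment joining the nearest observed points on either side, which is the behavior the footnote found to give the best reconstruction among the compared methods.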


Author information

Correspondence to Yagmur Gizem Cinar.


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Cinar, Y.G., Mirisaee, H., Goswami, P., Gaussier, E., Aït-Bachir, A., Strijov, V. (2017). Position-Based Content Attention for Time Series Forecasting with Sequence-to-Sequence RNNs. In: Liu, D., Xie, S., Li, Y., Zhao, D., El-Alfy, ES. (eds) Neural Information Processing. ICONIP 2017. Lecture Notes in Computer Science, vol 10638. Springer, Cham. https://doi.org/10.1007/978-3-319-70139-4_54

  • DOI: https://doi.org/10.1007/978-3-319-70139-4_54

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-70138-7

  • Online ISBN: 978-3-319-70139-4
