Almgren, R., Chriss, N., 2001. Optimal execution of portfolio transactions. The Journal of Risk 3, 5–39.
Bao, W., Liu, X.Y., 2019. Multi-agent deep reinforcement learning for liquidation strategy analysis. arXiv 1906.11046.
Bayraktar, E., Ludkovski, M., 2014. Liquidation in limit order books with controlled intensity. Mathematical Finance 24, 627–650.
Bertsimas, D., Lo, A.W., 1998. Optimal control of execution costs. Journal of Financial Markets 1, 1–50.
Biais, B., Hillion, P., Spatt, C., 1995. An empirical analysis of the limit order book and the order flow in the Paris Bourse. The Journal of Finance 50, 1655–1689.
Brockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang, J., Zaremba, W., 2016. OpenAI Gym. arXiv 1606.01540.
Cao, C., Hansch, O., Wang, X., 2009. The information content of an open limit-order book. Journal of Futures Markets 29, 16–41.
Cartea, A., Jaimungal, S., 2015. Optimal execution with limit and market orders. Quantitative Finance 15, 1279–1291.
Cartea, A., Jaimungal, S., Penalva, J., 2015. Algorithmic and high-frequency trading. Cambridge University Press, Cambridge, UK.
Cheng, A.T., 2017. AI jumps into dark pools. URL: https://www.institutionalinvestor.com/article/b15yx290rz5pcz/ai-jumps-into-dark-pools (visited 14/03/2020).
Cont, R., Kukanov, A., 2017. Optimal order placement in limit order markets. Quantitative Finance 17, 21–39.
Cont, R., Kukanov, A., Stoikov, S., 2014. The price impact of order book events. Journal of Financial Econometrics 12, 47–88.
Cummings, J.R., Frino, A., 2010. Further analysis of the speed of response to large trades in interest rate futures. Journal of Futures Markets 30, 705–724.
Daberius, K., Granat, E., Karlsson, P., 2019. Deep execution – value and policy based reinforcement learning for trading and beating market benchmarks. SSRN Scholarly Paper ID 3374766. Social Science Research Network, Rochester, NY. URL: https://papers.ssrn.com/abstract=3374766.
Danielsson, J., Payne, R., 2001. Measuring and explaining liquidity on an electronic limit order book: Evidence from Reuters D2000-2. SSRN Scholarly Paper ID 276541. Social Science Research Network, Rochester, NY. URL: https://papers.ssrn.com/abstract=276541.
Degryse, H., de Jong, F., van Ravenswaaij, M., Wuyts, G., 2005. Aggressive orders and the resiliency of a limit order market. Review of Finance 9, 201–242.
Fischer, T.G., Krauss, C., Deinert, A., 2019. Statistical arbitrage in cryptocurrency markets. Journal of Risk and Financial Management 12, 31.
Gomber, P., Schweickert, U., Theissen, E., 2015. Liquidity dynamics in an electronic open limit order book: An event study approach. European Financial Management 21, 52–78.
Gopikrishnan, P., Plerou, V., Gabaix, X., Stanley, H.E., 2000. Statistical properties of share volume traded in financial markets. Physical Review E 62.
Gould, M.D., Bonart, J., 2016. Queue imbalance as a one-tick-ahead price predictor in a limit order book. Market Microstructure and Liquidity 2, 1650006.
Gould, M.D., Porter, M.A., Williams, S., McDonald, M., Fenn, D.J., Howison, S.D., 2013. Limit order books. Quantitative Finance 13, 1709–1742.
Hendricks, D., Wilcox, D., 2014. A reinforcement learning extension to the Almgren-Chriss model for optimal trade execution, in: 2014 IEEE Conference on Computational Intelligence for Financial Engineering & Economics (CIFEr), pp. 457–464.
Kearns, M., Nevmyvaka, Y., 2013. Machine learning for market microstructure and high frequency trading, in: Easley, D., López de Prado, M., O’Hara, M. (Eds.), High frequency trading: New realities for traders, markets, and regulators. Risk Books, London, UK.
Kingma, D.P., Ba, J., 2017. Adam: A method for stochastic optimization. arXiv 1412.6980.
Lam, S.K., Pitrou, A., Seibert, S., 2015. Numba: A LLVM-based Python JIT compiler, in: Proceedings of the 2nd Workshop on the LLVM Compiler Infrastructure in HPC, pp. 1–6.
McKinney, W., 2010. Data structures for statistical computing in Python, in: Proceedings of the 9th Python in Science Conference, pp. 51–56.
Mnih, V., Badia, A.P., Mirza, M., Graves, A., Lillicrap, T.P., Harley, T., Silver, D., Kavukcuoglu, K., 2016. Asynchronous methods for deep reinforcement learning. arXiv 1602.01783.
Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., Riedmiller, M., 2013. Playing Atari with deep reinforcement learning. arXiv 1312.5602.
Nevmyvaka, Y., Feng, Y., Kearns, M., 2006. Reinforcement learning for optimized trade execution, in: Proceedings of the 23rd International Conference on Machine Learning, ACM, New York, NY, USA, pp. 673–680.
Nevmyvaka, Y., Kearns, M., Papandreou, A., Sycara, K., 2005. Electronic trading in order-driven markets: Efficient execution, in: Proceedings of the Seventh IEEE International Conference on E-Commerce Technology (CEC’05), pp. 190–197.
Ning, B., Ling, F.H.T., Jaimungal, S., 2018. Double deep Q-learning for optimal execution. arXiv 1812.06600.
Noonan, L., 2017. JPMorgan develops robot to execute trades. URL: https://www.ft.com/content/16b8ffb6-7161-11e7-aca6-c6bd07df1a3c (visited 19/09/2018).
Obizhaeva, A.A., Wang, J., 2013. Optimal trading strategy and supply/demand dynamics. Journal of Financial Markets 16, 1–32.
Patel, Y., 2018. Optimizing market making using multi-agent reinforcement learning. arXiv 1812.10252.
Perold, A.F., 1988. The implementation shortfall: Paper versus reality. Journal of Portfolio Management 14, 4–9.
Plerou, V., Gopikrishnan, P., Gabaix, X., Stanley, H.E., 2002. Quantifying stock-price response to demand fluctuations. Physical Review E 66, 027104.
Potters, M., Bouchaud, J.P., 2003. More statistical properties of order books and price impact. Physica A: Statistical Mechanics and its Applications 324, 133–140.
Ranaldo, A., 2004. Order aggressiveness in limit order book markets. Journal of Financial Markets 7, 53–74.
Rantil, A., Dahlén, O., 2018. Optimized trade execution with reinforcement learning. Master’s thesis. Linköping University, Sweden. URL: https://www.semanticscholar.org/paper/Optimized-Trade-Execution-with-Reinforcement-med-Rantil-Dahl%C3%A9n/fff05d2f0f414eead861a251aeff77f706804f6f.
Schnaubelt, M., 2019. A comparison of machine learning model validation schemes for non-stationary time series data. Discussion Papers in Economics 11/2019. Friedrich-Alexander-Universität Erlangen-Nürnberg.
Schnaubelt, M., Rende, J., Krauss, C., 2019. Testing stylized facts of Bitcoin limit order books. Journal of Risk and Financial Management 12, 25.
Schulman, J., Levine, S., Moritz, P., Jordan, M.I., Abbeel, P., 2017a. Trust region policy optimization. arXiv 1502.05477.
Schulman, J., Moritz, P., Levine, S., Jordan, M., Abbeel, P., 2018. High-dimensional continuous control using generalized advantage estimation. arXiv 1506.02438.
Schulman, J., Wolski, F., Dhariwal, P., Radford, A., Klimov, O., 2017b. Proximal policy optimization algorithms. arXiv 1707.06347.
Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai, M., Guez, A., Lanctot, M., Sifre, L., Kumaran, D., Graepel, T., Lillicrap, T., Simonyan, K., Hassabis, D., 2018. A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science 362, 1140–1144.
Sirignano, J.A., 2019. Deep learning for limit order books. Quantitative Finance 19, 549–570.
Sutton, R.S., Barto, A.G., 2018. Reinforcement learning: An introduction. 2nd ed., MIT Press, Cambridge, MA.
Tashman, L.J., 2000. Out-of-sample tests of forecasting accuracy: An analysis and review. International Journal of Forecasting 16, 437–450.
Terekhova, M., 2017. JPMorgan takes AI use to the next level. URL: https://www.businessinsider.de/jpmorgan-takes-ai-use-to-the-next-level-2017-8?r=US&IR=T (visited 19/09/2018).
Tsoukalas, G., Wang, J., Giesecke, K., 2017. Dynamic portfolio execution. Management Science 65, 2015–2040.
Van Der Walt, S., Colbert, S.C., Varoquaux, G., 2011. The NumPy array: A structure for efficient numerical computation. Computing in Science & Engineering 13, 22–30.
van Hasselt, H., Guez, A., Silver, D., 2015. Deep reinforcement learning with double Q-learning. arXiv 1509.06461.
Watkins, C.J.C.H., 1989. Learning from delayed rewards. PhD thesis. King’s College, University of Cambridge.
Williams, R.J., 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning 8, 229–256.
Zhang, G., Patuwo, B.E., Hu, M.Y., 1998. Forecasting with artificial neural networks: The state of the art. International Journal of Forecasting 14, 35–62.