Policy Optimization with Second-Order Advantage Information
Jiajin Li, Baoxiang Wang, Shengyu Zhang
Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence
Main track. Pages 5038-5044.
https://doi.org/10.24963/ijcai.2018/699
Policy optimization on high-dimensional continuous control tasks is difficult because policy gradient estimators suffer from large variance. We present the action subspace dependent gradient (ASDG) estimator, which incorporates the Rao-Blackwell theorem (RB) and control variates (CV) into a unified framework to reduce this variance. To invoke RB, our proposed algorithm (POSA) learns the underlying factorization structure of the action space from second-order advantage information. POSA captures this quadratic information explicitly and efficiently by utilizing a wide & deep architecture. Empirical studies show that our approach yields performance improvements on high-dimensional synthetic settings and OpenAI Gym's MuJoCo continuous control tasks.
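One way to read the abstract's key structural idea: near-zero cross terms in the second-order (Hessian) advantage information indicate action dimensions that do not interact, so they can be grouped into independent subspaces for Rao-Blackwellization. The sketch below is only an illustrative assumption based on that reading, not the paper's POSA/ASDG implementation; the helper name `action_subspaces` and the threshold `tau` are hypothetical.

```python
# Illustrative sketch (assumed, not the authors' code): partition action
# dimensions into (nearly) independent subspaces by thresholding the
# off-diagonal entries of an estimated second-order advantage (Hessian) matrix.
import numpy as np


def action_subspaces(hessian: np.ndarray, tau: float = 1e-3):
    """Group action dimensions coupled by the advantage Hessian.

    Dimensions i and j fall in the same subspace when |H[i, j]| > tau,
    taking the transitive closure (connected components of the coupling graph).
    """
    n = hessian.shape[0]
    coupled = np.abs(hessian) > tau
    parent = list(range(n))  # union-find over the coupling graph

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if coupled[i, j]:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())


if __name__ == "__main__":
    # Toy Hessian with two independent action blocks {0, 1} and {2}.
    H = np.array([[1.0, 0.4, 0.0],
                  [0.4, 0.8, 0.0],
                  [0.0, 0.0, 0.5]])
    print(action_subspaces(H))  # -> [[0, 1], [2]]
```

Under this reading, each recovered subspace could be treated separately when conditioning the gradient estimator, which is the role RB plays in the unified RB + CV framework described above.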
Keywords:
Machine Learning: Reinforcement Learning
Planning and Scheduling: Markov Decision Processes
Uncertainty in AI: Markov Decision Processes