
Relating reinforcement learning performance to classification performance

Published: 07 August 2005
DOI: 10.1145/1102351.1102411

Abstract

We prove a quantitative connection between the expected sum of rewards of a policy and binary classification performance on created subproblems. This connection holds without any unobservable assumptions (no assumption of independence, small mixing time, fully observable states, or even hidden states) and the resulting statement is independent of the number of states or actions. The statement is critically dependent on the size of the rewards and prediction performance of the created classifiers. We also provide some general guidelines for obtaining good classification performance on the created subproblems. In particular, we discuss possible methods for generating training examples for a classifier learning algorithm.
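The abstract's last point, generating training examples for a classifier learner, can be made concrete with a small sketch. The code below illustrates one plausible scheme only (advantage-weighted binary examples produced by Monte Carlo rollouts of a base policy), not the paper's exact construction; the ToyChain environment, the set_state generative-model access, and all function names are hypothetical assumptions made for this example.

```python
import random

class ToyChain:
    """Hypothetical toy environment: integer position on a chain; action 1
    moves right, action 0 moves left; reward 1 for reaching the end within
    the horizon. set_state() grants generative-model access (an assumption,
    in the spirit of reusable-trajectory / restart settings)."""
    def __init__(self, length=5, horizon=10):
        self.length, self.horizon = length, horizon
        self.pos, self.t = 0, 0

    def set_state(self, pos, t):
        self.pos, self.t = pos, t

    def step(self, action):
        self.pos = max(0, min(self.length, self.pos + (1 if action == 1 else -1)))
        self.t += 1
        reward = 1.0 if self.pos == self.length else 0.0
        done = reward > 0 or self.t >= self.horizon
        return self.pos, reward, done

def mc_value(env, pos, t, first_action, policy, n=50):
    """Monte Carlo estimate of the return from taking first_action at
    (pos, t) and following `policy` thereafter."""
    total = 0.0
    for _ in range(n):
        env.set_state(pos, t)
        s, r, done = env.step(first_action)
        ret = r
        while not done:
            s, r, done = env.step(policy(s))
            ret += r
        total += ret
    return total / n

def make_examples(env, policy, states, t):
    """Turn each state into a weighted binary example: the label is the
    empirically better action, the weight is the estimated advantage, so
    classification errors cost most where the reward gap is largest."""
    examples = []
    for s in states:
        q0 = mc_value(env, s, t, 0, policy)
        q1 = mc_value(env, s, t, 1, policy)
        examples.append((s, int(q1 > q0), abs(q1 - q0)))
    return examples

random.seed(0)
env = ToyChain()
base_policy = lambda s: random.randint(0, 1)  # rollouts of this policy label the data
print(make_examples(env, base_policy, states=range(5), t=0))
```

Training a cost-sensitive (importance-weighted) binary learner on such (state, label, weight) triples is one way to instantiate the created classification subproblems whose prediction performance the abstract relates to the policy's expected sum of rewards.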




Published In

ICML '05: Proceedings of the 22nd international conference on Machine learning
August 2005
1113 pages
ISBN: 1595931805
DOI: 10.1145/1102351

      Publisher

      Association for Computing Machinery

      New York, NY, United States




Cited By

• (2022) A reduction to binary approach for debiasing multiclass datasets. Proceedings of the 36th International Conference on Neural Information Processing Systems, 2480-2493. DOI: 10.5555/3600270.3600450. Online publication date: 28-Nov-2022.
• (2022) Imitation Learning of Neural Spatio-Temporal Point Processes. IEEE Transactions on Knowledge and Data Engineering, 34(11), 5391-5402. DOI: 10.1109/TKDE.2021.3054787. Online publication date: 1-Nov-2022.
• (2022) Adaptive Algorithms for Meta-Induction. Journal for General Philosophy of Science, 54(3), 433-450. DOI: 10.1007/s10838-021-09590-2. Online publication date: 7-Oct-2022.
• (2021) Reinforcement Learning. In Machine Learning, 399-430. DOI: 10.1007/978-981-15-1967-3_16. Online publication date: 21-Aug-2021.
• (2017) PbMMD. Engineering Applications of Artificial Intelligence, 60, 57-70. DOI: 10.1016/j.engappai.2016.12.008. Online publication date: 1-Apr-2017.
• (2016) Optimal Behavior is Easier to Learn than the Truth. Minds and Machines, 26(3), 243-252. DOI: 10.1007/s11023-016-9389-y. Online publication date: 1-Sep-2016.
• (2016) Anti Imitation-Based Policy Learning. Machine Learning and Knowledge Discovery in Databases, 559-575. DOI: 10.1007/978-3-319-46227-1_35. Online publication date: 4-Sep-2016.
• (2015) A classification-based approach to the optimal control of affine switched systems. 2015 54th IEEE Conference on Decision and Control (CDC), 2963-2968. DOI: 10.1109/CDC.2015.7402667. Online publication date: Dec-2015.
• (2013) Reinforcement learning in robotics: A survey. The International Journal of Robotics Research, 32(11), 1238-1274. DOI: 10.1177/0278364913495721. Online publication date: 23-Aug-2013.
• (2013) Learning from Demonstrations: Is It Worth Estimating a Reward Function? Advanced Information Systems Engineering, 17-32. DOI: 10.1007/978-3-642-40988-2_2. Online publication date: 2013.
