Article
DOI: 10.1145/1102351.1102459

A theoretical analysis of Model-Based Interval Estimation

Published: 07 August 2005

Abstract

Several algorithms for learning near-optimal policies in Markov Decision Processes have been analyzed and proven efficient. Empirical results have suggested that Model-based Interval Estimation (MBIE) learns efficiently in practice, effectively balancing exploration and exploitation. This paper presents the first theoretical analysis of MBIE, proving its efficiency even under worst-case conditions. The paper also introduces a new performance metric, average loss, and relates it to its less "online" cousins from the literature.
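
Since the abstract describes MBIE only at a high level, the sketch below illustrates the core idea for a finite MDP with rewards in [0, 1]: maintain empirical estimates of the transition and reward functions, widen them into confidence intervals, and run value iteration that is optimistic within those intervals. This is a minimal illustrative sketch, not the paper's exact algorithm; the function name, the constants, and the simple mass-shifting step are assumptions, and the L1 confidence radius only follows the spirit of the Weissman et al. (2003) deviation bound.

```python
import numpy as np

def mbie_optimistic_values(counts, reward_sums, gamma=0.95, delta=0.05,
                           n_iters=200):
    """Hedged sketch of MBIE-style optimistic planning.

    counts[s, a, s'] holds observed transition counts and
    reward_sums[s, a] the total reward observed for (s, a).
    """
    n_states, n_actions, _ = counts.shape
    v_max = 1.0 / (1.0 - gamma)  # value upper bound, assuming rewards in [0, 1]
    V = np.zeros(n_states)
    for _ in range(n_iters):
        # Unvisited state-action pairs default to the optimistic upper bound.
        Q = np.full((n_states, n_actions), v_max)
        for s in range(n_states):
            for a in range(n_actions):
                n = counts[s, a].sum()
                if n == 0:
                    continue
                p_hat = counts[s, a] / n       # empirical transition vector
                r_hat = reward_sums[s, a] / n  # empirical mean reward; a full
                                               # version would also widen this
                # L1 confidence radius in the spirit of Weissman et al. (2003);
                # ln(2^S - 2) is upper-bounded by S * ln 2 for simplicity.
                eps = np.sqrt(2.0 * (n_states * np.log(2.0)
                                     - np.log(delta)) / n)
                # Optimistic transition inside the L1 ball: shift up to eps/2
                # of probability mass onto the highest-valued successor state,
                # taking it away from the lowest-valued ones.
                p_opt = p_hat.copy()
                best = int(np.argmax(V))
                moved = min(eps / 2.0, 1.0 - p_opt[best])
                p_opt[best] += moved
                for s2 in np.argsort(V):       # lowest-valued states first
                    if moved <= 0.0:
                        break
                    if s2 == best:
                        continue
                    cut = min(moved, p_opt[s2])
                    p_opt[s2] -= cut
                    moved -= cut
                Q[s, a] = min(r_hat + gamma * p_opt @ V, v_max)
        V = Q.max(axis=1)
    return V
```

An agent acting greedily with respect to these optimistic values is drawn toward poorly explored state-action pairs until enough data shrinks their intervals, which is the exploration-exploitation balance the abstract refers to. The "average loss" metric can be read, informally, as the per-timestep gap between the return an optimal policy would achieve and the return the learner actually receives; the paper's precise definition and its relation to other metrics are given in the text.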


    Published In

    ICML '05: Proceedings of the 22nd International Conference on Machine Learning
    August 2005
    1113 pages
    ISBN: 1595931805
    DOI: 10.1145/1102351

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 07 August 2005


    Qualifiers

    • Article

    Acceptance Rates

    Overall acceptance rate: 140 of 548 submissions, 26%


    Article Metrics

    • Downloads (last 12 months): 35
    • Downloads (last 6 weeks): 5
    Reflects downloads up to 13 Feb 2025


    Cited By

    • (2024) Information-directed pessimism for offline reinforcement learning. Proceedings of the 41st International Conference on Machine Learning (pp. 25226-25264). DOI: 10.5555/3692070.3693080. Online publication date: 21-Jul-2024.
    • (2024) Reinforcement Learning in the Wild with Maximum Likelihood-based Model Transfer. Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems (pp. 516-524). DOI: 10.5555/3635637.3662902. Online publication date: 6-May-2024.
    • (2024) Uncertainty-Aware Portfolio Management With Risk-Sensitive Multiagent Network. IEEE Transactions on Neural Networks and Learning Systems, 35(1), 362-375. DOI: 10.1109/TNNLS.2022.3174642. Online publication date: Jan-2024.
    • (2024) Comprehensive Overview of Reward Engineering and Shaping in Advancing Reinforcement Learning Applications. IEEE Access, 12, 175473-175500. DOI: 10.1109/ACCESS.2024.3504735. Online publication date: 2024.
    • (2023) Recent advances in reinforcement learning in finance. Mathematical Finance, 33(3), 437-503. DOI: 10.1111/mafi.12382. Online publication date: 7-Apr-2023.
    • (2023) Leveraging transition exploratory bonus for efficient exploration in Hard-Transiting reinforcement learning problems. Future Generation Computer Systems, 145, 442-453. DOI: 10.1016/j.future.2023.04.002. Online publication date: Aug-2023.
    • (2022) Conservative dual policy optimization for efficient model-based reinforcement learning. Proceedings of the 36th International Conference on Neural Information Processing Systems (pp. 25450-25463). DOI: 10.5555/3600270.3602115. Online publication date: 28-Nov-2022.
    • (2022) Point and interval estimation of rock mass boreability for tunnel boring machine using an improved attribute-weighted deep belief network. Acta Geotechnica, 18(4), 1769-1791. DOI: 10.1007/s11440-022-01651-0. Online publication date: 24-Sep-2022.
    • (2022) Distribution-Free Reinforcement Learning. In Decision Making Under Uncertainty and Reinforcement Learning (pp. 221-235). DOI: 10.1007/978-3-031-07614-5_10. Online publication date: 3-Dec-2022.
    • (2021) Safe policy optimization with local generalized linear function approximations. Proceedings of the 35th International Conference on Neural Information Processing Systems (pp. 20759-20771). DOI: 10.5555/3540261.3541849. Online publication date: 6-Dec-2021.