Abstract
One of the most important characteristics of intelligent activity is the ability to change behaviour in response to many forms of feedback. Through learning, an agent can interact with its environment and improve its performance over time. However, most known learning techniques are time expensive: because the agent learns over time by experimentation, the task has to be executed many times. Hence, high-fidelity simulators can save a lot of time. In this context, this paper describes a framework designed to allow a team of real RoboNova-I humanoid robots to be simulated in the USARSim environment. Details of the complete process of modeling and programming the robot are given, as well as the learning methodology proposed to improve the robot's performance. Because a high-fidelity model is used, the learning algorithms can be explored extensively in simulation before being adapted to the real robots.
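The abstract does not specify which learning algorithm the framework uses; purely as an illustration of the kind of learning-by-experimentation loop that a high-fidelity simulator makes practical to run many times, the sketch below shows a minimal tabular Q-learning agent on a toy stand-in environment. ToyEnv, the state and action layout, and all hyperparameters are assumptions for illustration, not details taken from the paper.

```python
# Illustrative sketch only: a generic tabular Q-learning loop on a hypothetical
# toy environment, standing in for the many trial episodes a simulator allows.
import random
from collections import defaultdict

class ToyEnv:
    """Hypothetical 1-D walk: start at state 0, reach state 4 for a reward of +1."""
    def reset(self):
        self.s = 0
        return self.s

    def step(self, action):  # action: 0 = step left, 1 = step right
        self.s = max(0, min(4, self.s + (1 if action == 1 else -1)))
        done = (self.s == 4)
        return self.s, (1.0 if done else 0.0), done

def q_learning(env, episodes=500, max_steps=100, alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular Q-learning: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    q = defaultdict(lambda: [0.0, 0.0])  # state -> [value of action 0, value of action 1]
    for _ in range(episodes):
        s = env.reset()
        for _ in range(max_steps):
            # epsilon-greedy action selection; also explore when action values are tied
            if random.random() < epsilon or q[s][0] == q[s][1]:
                a = random.randrange(2)
            else:
                a = max(range(2), key=lambda i: q[s][i])
            s_next, r, done = env.step(a)
            q[s][a] += alpha * (r + gamma * max(q[s_next]) - q[s][a])
            s = s_next
            if done:
                break
    return q

if __name__ == "__main__":
    q = q_learning(ToyEnv())
    print({s: [round(v, 2) for v in vals] for s, vals in sorted(q.items())})
```

In a setting like the one the paper describes, the same update loop would be driven by states and rewards coming from the simulated humanoid in USARSim rather than from a toy environment, and the learned policy would then be transferred to the real robots.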
Copyright information
© 2008 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Colombini, E.L., da Silva Simões, A., Martins, A.C.G., Matsuura, J.P. (2008). A Framework for Learning in Humanoid Simulated Robots. In: Visser, U., Ribeiro, F., Ohashi, T., Dellaert, F. (eds) RoboCup 2007: Robot Soccer World Cup XI. Lecture Notes in Computer Science, vol 5001. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-68847-1_34
DOI: https://doi.org/10.1007/978-3-540-68847-1_34
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-68846-4
Online ISBN: 978-3-540-68847-1