Evolving Reinforcement Learning-Like Abilities for Robots
In [8], Yamauchi and Beer explored the ability of continuous-time recurrent neural networks (CTRNNs) to display reinforcement learning-like behavior. The tasks investigated were the generation and learning of short bit sequences. This "learning" came about without any modification of synaptic strengths, but simply from the internal dynamics of the evolved networks. In this paper, the approach is extended to two embodied-agent tasks in which simulated robots have to acquire and retain "knowledge" while moving around different mazes. The evolved controllers are analyzed and the results are discussed.
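As a point of reference for the mechanism described above, the sketch below shows the standard CTRNN state update (as formulated by Beer) integrated with the Euler method: the synaptic weights stay fixed, so any adaptive behavior must emerge from the evolution of the neurons' internal states. Network size, parameter values, and variable names are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of continuous-time recurrent neural network (CTRNN) dynamics.
# Standard form: tau_i * dy_i/dt = -y_i + sum_j w_ij * sigmoid(y_j + theta_j) + I_i
# All concrete values below are illustrative assumptions.
import numpy as np

def ctrnn_step(y, W, tau, theta, I, dt=0.01):
    """One Euler-integration step of the CTRNN state equation."""
    sigma = 1.0 / (1.0 + np.exp(-(y + theta)))  # neuron firing rates
    dydt = (-y + W @ sigma + I) / tau           # state derivatives
    return y + dt * dydt

# Example: a small 3-neuron network with fixed (e.g. evolved) weights,
# driven by a constant external input.
rng = np.random.default_rng(0)
n = 3
y = np.zeros(n)                          # internal neuron states
W = rng.normal(scale=1.0, size=(n, n))   # synaptic weights, never modified
tau = np.ones(n)                         # time constants
theta = np.zeros(n)                      # biases
I = np.array([0.5, 0.0, 0.0])            # external (sensory) input

for _ in range(1000):
    # Weights stay constant; any "learning" must come from these dynamics.
    y = ctrnn_step(y, W, tau, theta, I)
print(y)
```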
Web of Science ID: WOS:000182975200029
Year: 2003
Publication place: Berlin
ISBN: 978-3-540-00730-2
Series: Lecture Notes in Computer Science; 2606
Pages: 320-331
Review status: REVIEWED
Affiliation: EPFL
Event name | Event place | Event date
 | Trondheim, Norway | 17-20 March