Evolving Reinforcement Learning-Like Abilities for Robots

In [8], Yamauchi and Beer explored the ability of continuous-time recurrent neural networks (CTRNNs) to display reinforcement learning-like behavior. The tasks investigated were the generation and learning of short bit sequences. This "learning" came about without modification of synaptic strengths, arising solely from the internal dynamics of the evolved networks. In this paper, the approach is extended to two embodied agent tasks, in which simulated robots have to acquire and retain "knowledge" while moving around different mazes. The evolved controllers are analyzed and the results are discussed.
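
For readers unfamiliar with the controller class, the sketch below shows the standard CTRNN state update integrated with the forward Euler method. The network size, parameter ranges, and variable names are illustrative assumptions only; they do not reproduce the evolved controllers or the fitness setup described in the paper.

```python
# Minimal CTRNN sketch, assuming the standard formulation
#   tau_i * dy_i/dt = -y_i + sum_j w_ji * sigma(y_j + theta_j) + I_i
# integrated with forward Euler steps. All parameters are random placeholders.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class CTRNN:
    def __init__(self, n_neurons, dt=0.01, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        self.dt = dt
        self.y = np.zeros(n_neurons)                                # neuron states y_i
        self.weights = rng.uniform(-5, 5, (n_neurons, n_neurons))   # w[j, i]: connection from j to i
        self.biases = rng.uniform(-5, 5, n_neurons)                 # theta_j
        self.taus = rng.uniform(0.5, 5.0, n_neurons)                # time constants tau_i

    def step(self, external_input):
        """Advance the network state by one Euler step of size dt."""
        firing = sigmoid(self.y + self.biases)                      # sigma(y_j + theta_j)
        net = self.weights.T @ firing + external_input              # total input to each neuron
        dydt = (-self.y + net) / self.taus
        self.y += self.dt * dydt
        return sigmoid(self.y + self.biases)                        # output firing rates

# Example: drive a small network with a constant input for a few seconds of simulated time.
net = CTRNN(n_neurons=5, dt=0.01, rng=np.random.default_rng(0))
inputs = np.array([0.5, 0.0, 0.0, 0.0, 0.0])
for _ in range(500):
    outputs = net.step(inputs)
```

Because the weights stay fixed during a trial, any adaptation of behavior must come from the trajectory of the internal states y, which is the mechanism the abstract refers to.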


Editor(s):
Tyrrell, Andy M.
Haddow, Pauline C.
Torresen, Jim
Published in:
Evolvable Systems: From Biology to Hardware. ICES 2003, pp. 320-331
Presented at:
5th International Conference on Evolvable Systems (ICES'03), Trondheim, Norway, 17-20 March 2003
Year:
2003
Publisher:
Springer, Berlin
ISBN:
978-3-540-00730-2