Infoscience
 
conference paper

A Comparison of PSO and Reinforcement Learning for Multi-Robot Obstacle Avoidance

Di Mario, Ezequiel • Talebpour, Zeynab • Martinoli, Alcherio
2013
2013 IEEE Congress on Evolutionary Computation (CEC)
IEEE Congress on Evolutionary Computation

The design of high-performing robotic controllers constitutes an example of expensive optimization in uncertain environments due to the often large parameter space and noisy performance metrics. There are several evaluative techniques that can be employed for on-line controller design. Adequate benchmarks help in the choice of the right algorithm in terms of final performance and evaluation time. In this paper, we use multi-robot obstacle avoidance as a benchmark to compare two different evaluative learning techniques: Particle Swarm Optimization and Q-learning. For Q-learning, we implement two different approaches: one with discrete states and discrete actions, and another one with discrete actions but a continuous state space. We show that continuous PSO has the highest fitness overall, and Q-learning with continuous states performs significantly better than Q-learning with discrete states. We also show that in the single robot case, PSO and Q-learning with discrete states require a similar amount of total learning time to converge, while the time required with Q-learning with continuous states is significantly larger. In the multi-robot case, both Q-learning approaches require a similar amount of time as in the single robot case, but the time required by PSO can be significantly reduced due to the distributed nature of the algorithm.
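As a rough illustration of one of the two evaluative techniques compared here, the sketch below runs a minimal Particle Swarm Optimization loop on a toy 2-D objective. The hyperparameters, the sphere fitness function, and the `pso` helper are illustrative assumptions only — in the paper the fitness would be a noisy multi-robot obstacle-avoidance metric, not a clean analytic function.

```python
import random

random.seed(0)

def pso(fitness, dim=2, n_particles=8, iters=50, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO minimizing `fitness`; all parameters are toy choices."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]              # each particle's best position
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]  # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Standard velocity update: inertia + cognitive + social terms.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

best, best_f = pso(lambda x: sum(v * v for v in x))
print(best_f)  # should be close to 0 for this easy objective
```

Because each particle's fitness evaluation is independent, the inner loop can be distributed across robots, which is the property the abstract credits for reducing PSO's learning time in the multi-robot case.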

Type: conference paper
DOI: 10.1109/CEC.2013.6557565
Web of Science ID: WOS:000326235300020
Author(s): Di Mario, Ezequiel; Talebpour, Zeynab; Martinoli, Alcherio
Date Issued: 2013
Publisher: IEEE
Publisher place: New York
Published in: 2013 IEEE Congress on Evolutionary Computation (CEC)
ISBN of the book: 978-1-4799-0454-9
Total of pages: 8
Start page: 149
End page: 156
Subjects: Obstacle Avoidance • Q-Learning • Reinforcement Learning • Particle Swarm Optimization • Robotics
Editorial or Peer reviewed: REVIEWED
Written at: EPFL
EPFL units: NCCR-ROBOTICS, DISAL
Event name: IEEE Congress on Evolutionary Computation
Event place: Cancún, Mexico
Event date: June 20-23, 2013
Available on Infoscience: May 20, 2013
Use this identifier to reference this record: https://infoscience.epfl.ch/handle/20.500.14299/92326
Infoscience is a service managed and provided by the Library and IT Services of EPFL. © EPFL, all rights reserved.