The ability to move in complex environments is a fundamental requirement for robots to become part of our daily lives. While in simple environments it is usually straightforward for human designers to foresee the different conditions a robot will be exposed to, in more complex environments the manual design of high-performing controllers becomes challenging, especially when the robots' on-board resources are limited. In this article, we use a distributed implementation of Particle Swarm Optimization to design robotic controllers that can navigate around obstacles of different shapes and sizes. We analyze how the behavior and performance of the controllers differ depending on the environment where learning takes place, showing that different arenas lead to different avoidance behaviors. We also test the best controllers in environments not encountered during learning, both in simulation and on real robots, and show that no single learning environment generates a behavior general and robust enough to succeed in all testing environments.
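For readers unfamiliar with the optimizer mentioned above, the following is a minimal sketch of standard (centralized) Particle Swarm Optimization on a toy objective. It is illustrative only: the article's distributed variant, the controller parameterization, and the fitness function are not shown here, and all names and hyperparameter values (`n_particles`, `w`, `c1`, `c2`) are assumptions for this example, not the article's settings. In the actual setup, `objective` would correspond to evaluating a candidate controller's navigation performance on a robot.

```python
import random

def pso(objective, dim, n_particles=10, iters=50,
        w=0.7, c1=1.5, c2=1.5, bounds=(-1.0, 1.0), seed=0):
    """Minimal PSO: minimize `objective` over a `dim`-dimensional box.

    Each particle keeps a position, a velocity, and its personal best;
    the swarm shares a single global best. Hyperparameters are typical
    textbook values, not those used in the article.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    # Random initial positions, zero initial velocities.
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity update: inertia + cognitive pull (personal best)
                # + social pull (global best).
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Position update, clamped to the search bounds.
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective: the sphere function, whose minimum (0) is at the origin.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
```

In the distributed setting the article describes, the key difference from this sketch is that fitness evaluations run on the robots themselves rather than in a single central loop, which is what makes limited on-board resources a relevant constraint.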