The evolution of cooperation is a fundamental and enduring puzzle in biology and the social sciences. Hundreds of theoretical models have been proposed, but empirical research has been hindered by the long generation times of social organisms and by the difficulty of quantifying the costs and benefits of cooperation. The significant increase in computational power over the last decade has made the artificial evolution of simple social robots a promising alternative. This thesis is concerned with the artificial evolution of groups of cooperating robots. It argues that the artificial evolution of robotic agents is a powerful tool for addressing open questions in evolutionary biology, and shows how insights gained from the study of artificial and biological multi-agent systems can benefit both biology and robotics. The work presented in this thesis contributes to biology by showing how artificial evolution can be used to quantify key factors in the evolution of cooperation in biological systems and by providing an empirical test of a central part of biological theory. In addition, it reveals the importance of the genetic architecture for the evolution of efficient cooperation in groups of organisms. The work also contributes to robotics by identifying three classes of multi-robot tasks, distinguished by the degree of cooperation required between team members, and by suggesting guidelines for the evolution of efficient robot teams. Furthermore, it shows how simulations can be used to successfully evolve controllers for physical robot teams.