This paper surveys the main ethical issues pertaining to robotics that have been discussed in the literature. We begin with the question of responsibility ascription that arises when an autonomous system malfunctions or harms people. Next, we discuss the ethical issues emerging in two sets of robotic applications: service robots that peacefully interact with humans, and lethal robots created to fight on battlefields. We then provide a short overview of machine ethics, a new research trend that aims at designing and implementing artificial systems capable of "morally" acceptable behavior. Finally, we highlight the resulting gaps in legislation and discuss the need for guidelines to regulate the creation and deployment of such autonomous systems. Too often, when such systems are designed, their expected benefits overshadow negative consequences that are partly unknown but potentially large.