The idea of moving robots or prosthetic devices not by manual control but by mere thinking (i.e., by the brain activity of human subjects) has fascinated researchers for the last 30 years, yet only recently have the first experiments shown that this is possible. How can brainwaves be used to directly control robots? Most of the hope for brain-controlled robots comes from invasive approaches, which provide detailed single-neuron activity recorded from microelectrodes implanted in the brain. The motivation for these invasive approaches is that motor parameters related to hand and arm movements have been widely shown to be encoded in a distributed and redundant way by ensembles of neurons in the motor system of the brain: the motor, premotor and posterior parietal cortices.

For humans, however, non-invasive approaches are preferable, as they avoid health risks and the associated ethical concerns. Most non-invasive brain-computer interfaces (BCIs) use electroencephalogram (EEG) signals, i.e., the electrical brain activity recorded from electrodes placed on the scalp. The main source of the EEG is the synchronous activity of thousands of cortical neurons, so EEG signals suffer from limited spatial resolution and considerable noise, since they are measured on the scalp. As a consequence, current EEG-based brain-actuated devices are limited by a low channel capacity and are considered too slow for controlling rapid and complex sequences of robot movements. Recently, however, we have shown for the first time that online analysis of EEG signals, combined with advanced robotics and machine learning techniques, suffices for humans to continuously control a mobile robot and a wheelchair. In this article we review our work on non-invasive brain-controlled robots and discuss some of the challenges ahead.
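To make the core loop concrete, the sketch below illustrates, in a deliberately simplified way, the kind of statistical classification an EEG-based BCI relies on: feature vectors derived from brain signals are assigned to discrete mental states, which are then mapped to robot commands. This is our own toy illustration, not the system described in this article; the simulated Gaussian features, the nearest-class-mean classifier, and the command names are all assumptions made for demonstration.

```python
import numpy as np

# Hypothetical illustration (not the actual BCI pipeline): simulate
# two-dimensional "EEG feature" vectors for two mental states, fit a
# nearest-class-mean classifier, and map its output to robot commands.
rng = np.random.default_rng(0)

# Each mental state is modeled as a Gaussian cloud in feature space.
state_a = rng.normal(loc=[1.0, 0.0], scale=0.3, size=(100, 2))
state_b = rng.normal(loc=[0.0, 1.0], scale=0.3, size=(100, 2))

# "Training": store the mean feature vector of each class.
class_means = np.vstack([state_a.mean(axis=0), state_b.mean(axis=0)])

def classify(features):
    """Return the index of the closest class mean (0 or 1)."""
    distances = np.linalg.norm(class_means - features, axis=1)
    return int(np.argmin(distances))

# Map classifier output to (hypothetical) robot commands.
commands = ["TURN_LEFT", "TURN_RIGHT"]
new_sample = rng.normal(loc=[1.0, 0.0], scale=0.3)
print(commands[classify(new_sample)])
```

Real BCIs use far richer features (e.g., band power over many electrodes) and stronger classifiers, but the structure of the loop is the same: continuous brain signals in, discrete control decisions out.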