In recent years there has been increasing interest in using human feedback during robot operation to incorporate non-expert human knowledge while learning complex tasks. Most work has considered reinforcement learning frameworks where human feedback, provided through multiple modalities (speech, graphical interfaces, gestures), is converted into a reward. This paper explores a different communication channel: cognitive EEG brain signals related to the perception of errors by humans. In particular, we consider error potentials (ErrP), voltage deflections that appear when a user perceives an error, whether committed by herself or by an external machine, and thus encode binary information about how well a robot is performing a task. Based on this potential, we propose an algorithm, built on policy matching for inverse reinforcement learning, to infer the user's goal from brain signals. We present two case studies: a target-reaching task in a grid world and a task with a real mobile robot. For discrete worlds, the results show that the robot is able to infer and reach the target using only error potentials elicited by human observation as feedback. Finally, promising preliminary results were obtained for continuous states and actions in real scenarios.
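To make the core idea concrete, the following is a minimal, hypothetical sketch of how binary error signals from an observer can drive goal inference in a grid world. It is not the paper's policy-matching IRL algorithm: it replaces it with a simplified Bayesian update over candidate goals, the observer's feedback is simulated (a real system would decode it from EEG), and all names, grid sizes, and the decoder reliability `P_CORRECT` are assumptions for illustration.

```python
import random

GRID = 5
GOALS = [(x, y) for x in range(GRID) for y in range(GRID)]  # candidate targets
TRUE_GOAL = (4, 2)   # known only to the simulated human observer
P_CORRECT = 0.8      # assumed reliability of ErrP decoding, used in the update

def dist(a, b):
    """Manhattan distance on the grid."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def neighbors(p):
    x, y = p
    cand = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [c for c in cand if 0 <= c[0] < GRID and 0 <= c[1] < GRID]

def observer_feedback(prev, cur):
    """Binary 'error' signal: the move failed to approach the true goal.
    A real system would decode this signal from EEG, with decoding noise."""
    return dist(cur, TRUE_GOAL) >= dist(prev, TRUE_GOAL)

def update(belief, prev, cur, error):
    """Bayes update over candidate goals from one binary signal,
    modelling an imperfect decoder through P_CORRECT."""
    new = {}
    for g, p in belief.items():
        would_flag = dist(cur, g) >= dist(prev, g)  # label goal g would predict
        like = P_CORRECT if would_flag == error else 1 - P_CORRECT
        new[g] = p * like
    z = sum(new.values())
    return {g: p / z for g, p in new.items()}

random.seed(0)
belief = {g: 1 / len(GOALS) for g in GOALS}  # uniform prior over targets
pos = (0, 0)
for _ in range(80):
    target = max(belief, key=belief.get)     # current MAP estimate of the goal
    opts = neighbors(pos)
    # step toward the MAP goal; explore randomly once sitting on it, which
    # disambiguates goals that every previous move approached equally
    step = min(opts, key=lambda c: dist(c, target)) if pos != target else random.choice(opts)
    err = observer_feedback(pos, step)
    belief = update(belief, pos, step, err)
    pos = step

print("inferred goal:", max(belief, key=belief.get))
```

Because the true goal is consistent with every (noiseless) feedback signal, its posterior mass can only grow relative to inconsistent candidates; the random exploration once the robot reaches its current estimate is what separates goals lying along the same approach path.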