Human activity recognition is a thriving research field. Numerous studies across its sub-areas propose different methods; however, unlike in other application domains, there is a lack of established benchmarking problems for activity recognition. Typically, each research group tests and reports the performance of its algorithms on its own datasets, using experimental setups conceived specifically for that purpose. In this work, we introduce a versatile human activity dataset designed to fill that void. We illustrate its use by presenting comparative results for different classification techniques, and we discuss several metrics that can be used to assess their performance. As an initial benchmark, we expect that the possibility of replicating and outperforming the presented results will contribute to further advances in state-of-the-art methods.