Abstract

Peer assessment is seen as a powerful tool for achieving scalability in the evaluation of complex assignments in large courses, including virtual ones such as massive open online courses (MOOCs). However, the adoption of peer assessment is slow, due in part to the lack of ready-to-use systems, and its validity is still under discussion. In this paper, to tackle some of these issues, we present a dataset containing assessments of student submissions by student peers and by instructors, collected during our Social Media course with 60 Master's-level university students. The dataset allows for training and testing algorithms that predict the grades of instructors based on the grades of student peers.
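As an illustration of the kind of task the dataset supports, the sketch below shows a simple baseline that predicts instructor grades from aggregated peer grades. This is not the authors' method; the file name (`peer_assessment.csv`) and column names (`submission_id`, `peer_grade`, `instructor_grade`) are assumptions made for illustration only and would need to be adapted to the actual dataset layout.

```python
# Minimal sketch, assuming a CSV export with one row per peer assessment
# and hypothetical columns: submission_id, peer_grade, instructor_grade.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("peer_assessment.csv")  # hypothetical file name

# Aggregate the (possibly multiple) peer grades given to each submission.
per_submission = df.groupby("submission_id").agg(
    peer_mean=("peer_grade", "mean"),
    peer_median=("peer_grade", "median"),
    instructor_grade=("instructor_grade", "first"),
)

X = per_submission[["peer_mean", "peer_median"]]
y = per_submission["instructor_grade"]

# Baseline: linear regression from aggregated peer grades to the
# instructor grade, scored with 5-fold cross-validated mean absolute error.
model = LinearRegression()
mae = -cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error")
print(f"Cross-validated MAE: {mae.mean():.2f}")
```

A baseline of this kind gives a reference point against which more elaborate grade-prediction algorithms trained on the dataset can be compared.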
