
Abstract

The number of students enrolled in standard and online university programming courses is huge. This calls for automated evaluation of students' assignments and for automated support for learning. We aim to develop methods and tools for objective and reliable automated grading that can also provide substantial and comprehensible feedback. The benefits should be twofold: reducing the workload for teachers and providing high-quality feedback to students during the learning process. We introduce software verification and control flow graph similarity measurement into the automated evaluation of students' programs. Our new grading framework merges the outcomes obtained by combining these two approaches with the outcomes obtained by automated testing. We present the corresponding tools, which are publicly available and open source. The tools are based on a low-level intermediate code representation, which makes them applicable to a number of programming languages. Experimental evaluation of the proposed grading framework is performed on a corpus of university students' programs written in the programming language C. The results of the experiments show that the synergy of the proposed approaches improves the quality and precision of automated grading and that automatically generated grades are highly correlated with manually determined grades. The results also show that our approach can be trained to adapt to a teacher's grading style. In this paper we integrate several techniques for the evaluation of students' assignments. The obtained experimental results suggest that the presented tools can find real-world applications in studying and grading.
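The abstract describes merging outcomes from automated testing, control flow graph similarity, and software verification, and training the combination to match a teacher's grading style. The sketch below illustrates one way such a merge could look; it is an assumption for illustration only, not the paper's method, and all function names and the linear least-squares model are hypothetical.

```python
# A minimal sketch (not the authors' implementation) of merging outcomes from
# automated testing, CFG similarity, and software verification into one grade,
# with weights fitted to a teacher's manual grades. All names are assumptions.
import numpy as np

def fit_grading_weights(test_scores, cfg_similarity, verification_ok, teacher_grades):
    """Fit weights of a linear combination so that automatically computed
    grades approximate the teacher's grades (least-squares fit)."""
    # Each column is one evaluation outcome, normalized to [0, 1];
    # the constant column lets the model absorb a grading offset.
    features = np.column_stack([
        test_scores,        # fraction of automated tests passed
        cfg_similarity,     # CFG similarity to a model solution
        verification_ok,    # 1.0 if verification found no defects, else 0.0
        np.ones(len(test_scores)),
    ])
    weights, *_ = np.linalg.lstsq(features, teacher_grades, rcond=None)
    return weights

def grade(weights, test_score, cfg_sim, verified):
    """Combine the three outcomes into one automatic grade."""
    return float(np.dot(weights, [test_score, cfg_sim, verified, 1.0]))

# Example: fit on a few manually graded submissions, then grade a new one.
w = fit_grading_weights(
    test_scores=np.array([1.0, 0.6, 0.2]),
    cfg_similarity=np.array([0.9, 0.7, 0.3]),
    verification_ok=np.array([1.0, 1.0, 0.0]),
    teacher_grades=np.array([10.0, 7.0, 3.0]),
)
print(grade(w, test_score=0.8, cfg_sim=0.75, verified=1.0))
```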
