Abstract

Context: The number of students enrolled in standard and online university programming courses is rapidly increasing, which calls for automated evaluation of students' assignments. Objective: We aim to develop methods and tools for objective and reliable automated grading that can also provide substantial and comprehensible feedback. Our approach targets introductory programming courses, which have a number of specific features and goals. The benefits are twofold: reducing the workload for teachers and providing helpful feedback to students during the learning process. Method: For sophisticated automated evaluation of students' programs, our grading framework combines the results of three approaches: (i) testing, (ii) software verification, and (iii) control flow graph similarity measurement. We present our tools for software verification and control flow graph similarity measurement, which are publicly available and open source. The tools are based on an intermediate code representation, so they can be applied to a number of programming languages. Results: Empirical evaluation of the proposed grading framework is performed on a corpus of programs written in the programming language C by university students within an introductory programming course. The results show that the synergy of the proposed approaches improves the quality and precision of automated grading, and that the automatically generated grades are highly correlated with instructor-assigned grades. The results also show that our approach can be trained to adapt to a teacher's grading style. Conclusions: In this paper we integrate several techniques for the evaluation of students' assignments. The obtained results suggest that the presented tools can find real-world applications in automated grading.
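The abstract does not specify how the three component scores are combined or how the framework is trained to match a teacher's grading style. As an illustration only, the following Python sketch fits a simple linear combination of three hypothetical component scores (test pass rate, verification score, control flow graph similarity) to instructor-assigned grades; all names, the linear model, and the toy numbers are assumptions, not the paper's actual method.

```python
# Illustrative sketch only: combine three per-program component scores into a
# single grade by fitting weights to instructor-assigned grades (least squares).
import numpy as np

def fit_weights(component_scores, instructor_grades):
    """component_scores: (n_programs, 3) array with columns
    [test_pass_rate, verification_score, cfg_similarity], each in [0, 1].
    instructor_grades: (n_programs,) array of reference grades."""
    # Append a constant column so the model also learns an intercept.
    X = np.column_stack([component_scores, np.ones(len(component_scores))])
    weights, *_ = np.linalg.lstsq(X, instructor_grades, rcond=None)
    return weights  # three component weights plus an intercept

def predict_grade(scores, weights):
    """Grade a new program from its three component scores."""
    return float(np.dot(np.append(scores, 1.0), weights))

# Toy example: three training programs with known instructor grades.
train_scores = np.array([[1.0, 0.9, 0.95],
                         [0.5, 0.4, 0.60],
                         [0.0, 0.1, 0.20]])
train_grades = np.array([10.0, 6.0, 2.0])
w = fit_weights(train_scores, train_grades)
print(predict_grade([0.8, 0.7, 0.85], w))
```

A linear model is chosen here only because it is the simplest way to show score fusion being calibrated against instructor grades; the paper's framework may combine the three approaches differently.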
