Formal Autograding in a Classroom (Experience Report)
We report our first experience enhancing automated grading in an undergraduate programming course with formal verification. In our experiment, we deploy a program verifier that checks equivalence between student submissions and reference solutions, alongside the existing testing-based grading infrastructure. We find that the conservative nature of program equivalence checking works to our advantage: it lets us differentiate student submissions by their high-level program structure, in particular their recursion pattern, even when their input-output behaviour is identical. Consequently, we achieve (1) higher confidence in the correctness of idiomatic solutions and (2) a more thorough assessment of the solution landscape, revealing solutions beyond those envisioned by the instructors.
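To illustrate the core idea, here is a minimal hypothetical sketch (not the paper's actual verifier): two submissions with identical input-output behaviour can still differ in recursion pattern, so a testing-based grader accepts both, while a conservative structural comparison tells them apart. The function names and the crude AST-based check below are illustrative assumptions only.

```python
import ast

# Hypothetical reference solution: plain structural recursion on n.
REF = """
def ref_sum(n):
    return 0 if n == 0 else n + ref_sum(n - 1)
"""

# Hypothetical student submission: same input-output behaviour,
# but a different recursion pattern (accumulator / tail recursion).
STUDENT = """
def student_sum(n, acc=0):
    return acc if n == 0 else student_sum(n - 1, acc + n)
"""

def recursion_arity(src):
    """Count arguments at each recursive call site -- a crude
    stand-in for the structural distinctions a verifier can draw."""
    tree = ast.parse(src)
    name = tree.body[0].name
    return {len(call.args) for call in ast.walk(tree)
            if isinstance(call, ast.Call)
            and isinstance(call.func, ast.Name)
            and call.func.id == name}

ns = {}
exec(REF, ns)
exec(STUDENT, ns)

# Testing-based grading: both pass on every sampled input.
assert all(ns["ref_sum"](n) == ns["student_sum"](n) for n in range(50))

# Structural check: the recursion patterns differ.
print(recursion_arity(REF))      # {1}: structural recursion
print(recursion_arity(STUDENT))  # {2}: accumulator-style recursion
```

A real deployment would replace the AST heuristic with a program verifier's equivalence check; the point of the sketch is only that behavioural testing and structural comparison answer different questions about the same pair of programs.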