Content-based methods in peer assessment of open-response questions to grade students as authors and as graders

Abstract:

Massive Open Online Courses (MOOCs) use different types of assignments to evaluate student knowledge. Multiple-choice tests are particularly apt because large numbers of assignments can be assessed automatically. However, certain skills require open responses that cannot be assessed automatically, while their evaluation by instructors or teaching assistants is infeasible given the large number of students. A potentially effective solution is peer assessment, whereby students grade the answers of other students. However, to avoid bias due to inexperience, such grades must be filtered. We describe a factorization approach to grading: a scalable method capable of dealing with very high volumes of data. Our method is also able to represent open-response content using a vector space model of the answers. Since reliable peer assessment requires students to grade coherently, students can be motivated by receiving grades that reflect not only their own answers but also their effort as graders. The method described tackles both aspects simultaneously. Finally, in a real-world university setting in Spain, we compared the grades obtained by our method with grades awarded by university instructors; the results indicate a notable improvement from using a content-based approach. There was no evidence that instructor grading would have led to more accurate grading outcomes than the assessments produced by our models.
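For illustration, the sketch below shows one simplified way a factorization model over peer grades can jointly score students as authors and as graders; it is not the paper's actual formulation. All names (`factorize_peer_grades`, `n_factors`, `learn_rate`, the coherence measure) are hypothetical, and the content-based component (conditioning on a vector space representation of the answers) is only noted in a comment.

```python
import random
import numpy as np

def factorize_peer_grades(triples, n_students, n_factors=5,
                          learn_rate=0.02, reg=0.05, n_epochs=200, seed=0):
    """Fit a biased matrix-factorization model to peer grades via SGD.

    triples -- list of (author, grader, grade) with grades scaled to [0, 1].
    Returns (author_score, grader_coherence), one value per student.
    """
    rng = np.random.default_rng(seed)
    random.seed(seed)
    triples = list(triples)  # local copy; we shuffle between epochs
    mu = float(np.mean([g for _, _, g in triples]))   # global mean grade
    a = np.zeros(n_students)                          # author ability bias
    b = np.zeros(n_students)                          # grader severity bias
    U = 0.01 * rng.standard_normal((n_students, n_factors))  # author factors
    V = 0.01 * rng.standard_normal((n_students, n_factors))  # grader factors
    # NOTE: the paper additionally exploits the answer texts; a content-aware
    # variant could derive U[i] from a vector space representation of student
    # i's answer (e.g. U[i] = W @ x_i with x_i a TF-IDF vector).

    def predict(i, j):
        return mu + a[i] + b[j] + U[i] @ V[j]

    for _ in range(n_epochs):
        random.shuffle(triples)
        for i, j, g in triples:
            err = g - predict(i, j)
            a[i] += learn_rate * (err - reg * a[i])
            b[j] += learn_rate * (err - reg * b[j])
            u_old = U[i].copy()
            U[i] += learn_rate * (err * V[j] - reg * U[i])
            V[j] += learn_rate * (err * u_old - reg * V[j])

    # Grade as author: the model's estimate of the student's answer quality.
    author_score = mu + a
    # Grade as grader: coherence of the grades a student gave, here measured
    # as agreement (1 minus mean absolute error) with the model's predictions.
    abs_err = np.zeros(n_students)
    counts = np.zeros(n_students)
    for i, j, g in triples:
        abs_err[j] += abs(g - predict(i, j))
        counts[j] += 1
    grader_coherence = np.where(counts > 0,
                                1.0 - abs_err / np.maximum(counts, 1),
                                np.nan)  # NaN for students who graded nothing
    return author_score, grader_coherence

# Toy usage: 4 students, each grading some of the others' answers.
peer_grades = [(0, 1, 0.9), (0, 2, 0.8), (1, 0, 0.5),
               (1, 2, 0.6), (2, 0, 0.4), (2, 1, 0.3),
               (3, 0, 0.7), (3, 1, 0.75), (0, 3, 0.85)]
authors, graders = factorize_peer_grades(peer_grades, n_students=4)
```

Scoring both roles from one fitted model is what lets a single system motivate students simultaneously as authors and as graders, as the abstract describes.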

Keywords: Peer assessment, Factorization, Preference learning, Grading graders, MOOCs

Article history: Received 15 February 2016; Revised 9 May 2016; Accepted 19 June 2016; Available online 20 June 2016; Version of Record 20 December 2016.

DOI: https://doi.org/10.1016/j.knosys.2016.06.024