Identifying items for moderation in a peer assessment framework

Authors:

Highlights:

Abstract:

Peer assessment can be considered within the framework of group decision making and can hence draw on many of the methods and evaluation processes proposed in that field. Despite the potential of peer assessment to greatly reduce the workload of educators, a key hurdle to its uptake is its perceived reliability, owing to the preconception that peers may not be as reliable or fair as ‘experts’. In this contribution, we consider approaches to moderation that aim to increase the accuracy of the scores awarded while reducing the total workload of the subject experts (lecturers, in the university context). First, we propose several indices which, in combination, can be used to estimate the reliability of peer markers. Second, we consider the consensus among the scores each peer receives. We thus approach the problem of reliability from two angles, and from these considerations we can identify a subset of peers whose results should be flagged for moderation. We conduct numerical experiments to investigate the potential of these techniques in the context of peer assessment with heterogeneous marking behaviors.
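The abstract does not give the index definitions, so the following is only a minimal sketch of the two-angle idea under assumed definitions: marker reliability measured as mean absolute deviation from a per-submission median, and consensus measured as the dispersion of the scores a submission received. The marking matrix, the thresholds, and both index choices are hypothetical stand-ins for the indices and aggregation functions the paper actually proposes.

```python
import statistics

# Toy marking matrix: scores[marker] = the scores that peer gave, one per
# submission. For simplicity this sketch assumes every peer marks every
# submission; the paper's setting need not be this restrictive.
scores = {
    "p1": [70, 65, 80, 55],
    "p2": [72, 63, 78, 57],
    "p3": [71, 90, 78, 85],  # erratic on submissions 1 and 3
    "p4": [71, 66, 79, 54],
}
n = len(next(iter(scores.values())))

# Reference score for each submission: the median over all markers.
reference = [statistics.median(s[i] for s in scores.values()) for i in range(n)]

def reliability_index(marker):
    """Mean absolute deviation from the reference scores; lower = more reliable."""
    return sum(abs(s - r) for s, r in zip(scores[marker], reference)) / n

def consensus_index(submission):
    """Dispersion of the scores one submission received (population std. dev.)."""
    return statistics.pstdev(s[submission] for s in scores.values())

REL_TOL, CONS_TOL = 5.0, 4.0  # illustrative thresholds, not the paper's

unreliable_markers = {m for m in scores if reliability_index(m) > REL_TOL}
contested_submissions = [i for i in range(n) if consensus_index(i) > CONS_TOL]

print("markers flagged for moderation:", unreliable_markers)         # {'p3'}
print("submissions flagged for moderation:", contested_submissions)  # [1, 3]
```

The median is used as the reference here because it is robust to the very outliers the reliability index is meant to detect; anything exceeding either threshold would be passed to the lecturer for moderation rather than triggering wholesale re-marking.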

Keywords: Peer assessment, Decision making, Aggregation functions, Moderation, Variation indices

Review history: Received 31 January 2018, Revised 9 May 2018, Accepted 22 May 2018, Available online 24 May 2018, Version of Record 5 December 2018.

DOI: https://doi.org/10.1016/j.knosys.2018.05.032