Cross-validation study of methods and technologies to assess mental models in a complex problem solving situation

Abstract

This paper reports a cross-validation study aimed at identifying reliable and valid assessment methods and technologies for natural language (i.e., written text) responses to complex problem-solving scenarios. To investigate current assessment technologies for text-based responses to problem-solving scenarios (i.e., ALA-Reader and T-MITOCAR), this study compared the two most fully developed technologies with an alternative methodology. Comparisons among the three models (benchmark, ALA-Reader, and T-MITOCAR) yielded two findings: (a) the benchmark model produced the most descriptive concept maps; and (b) the ALA-Reader model correlated more highly with the benchmark model than did T-MITOCAR. The results imply that the benchmark model is a viable alternative to the two existing technologies and is worth exploring in a larger-scale study.

Keywords: Assessment technology, Concept map, Mental models, Problem solving, Validation study

Article history: Available online 16 December 2011.

DOI: https://doi.org/10.1016/j.chb.2011.11.018