Democratic approximation of lexicographic preference models

Authors:

Highlights:

Abstract

Lexicographic preference models (LPMs) are an intuitive representation that corresponds to many real-world preferences exhibited by human decision makers. Previous algorithms for learning LPMs produce a “best guess” LPM that is consistent with the observations. Our approach is more democratic: we do not commit to a single LPM. Instead, we approximate the target using the votes of a collection of consistent LPMs. We present two variations of this method—variable voting and model voting—and empirically show that these democratic algorithms outperform the existing methods. Versions of these democratic algorithms are presented in both the case where the preferred values of attributes are known and the case where they are unknown. We also introduce an intuitive yet powerful form of background knowledge to prune some of the possible LPMs. We demonstrate how this background knowledge can be incorporated into variable and model voting and show that doing so improves performance significantly, especially when the number of observations is small.
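The abstract describes model voting only at a high level. As an illustrative sketch (not the paper's actual algorithm), the snippet below samples random attribute orderings as candidate LPMs, discards those inconsistent with the observed comparisons, and lets the surviving models vote on a new pair. The binary attributes with 1 as the known preferred value, the function names, and the use of rejection sampling are all assumptions made for illustration.

```python
# Sketch of "model voting": keep LPMs (attribute orderings) consistent
# with the observations and predict by majority vote. Assumes binary
# attributes where 1 is the known preferred value; sampling strategy
# and names are illustrative, not the paper's exact method.
import random

def lpm_prefers(order, a, b):
    """True if the LPM given by `order` prefers item a over b, False if
    it prefers b over a, None if a and b tie on every attribute."""
    for attr in order:
        if a[attr] != b[attr]:
            return a[attr] > b[attr]
    return None

def consistent_models(attrs, observations, n_samples=5000, seed=0):
    """Sample random attribute orderings and keep those that agree with
    every observed comparison (a, b), read as 'a is preferred to b'."""
    rng = random.Random(seed)
    models = []
    for _ in range(n_samples):
        order = attrs[:]
        rng.shuffle(order)
        # A tie (None) also rejects the model, since the observation
        # asserts a strict preference.
        if all(lpm_prefers(order, a, b) for a, b in observations):
            models.append(tuple(order))
    return models

def model_vote(models, a, b):
    """Majority vote of the consistent LPMs: is a preferred to b?"""
    votes = [lpm_prefers(list(m), a, b) for m in models]
    return sum(v is True for v in votes) > sum(v is False for v in votes)

# Usage: one observation in which attribute x1 dominates x2 and x3.
attrs = ["x1", "x2", "x3"]
obs = [({"x1": 1, "x2": 0, "x3": 0}, {"x1": 0, "x2": 1, "x3": 1})]
models = consistent_models(attrs, obs)
print(model_vote(models, {"x1": 1, "x2": 0, "x3": 1},
                 {"x1": 0, "x2": 1, "x3": 1}))  # True: x1 decides
```

Because every ordering that survives the observation above must rank x1 first, the vote is unanimous here; with fewer or less informative observations, the consistent models disagree and the majority vote aggregates their predictions.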

Keywords: Lexicographic models, Preference learning, Bayesian methods

Article history: Received 27 February 2009, Revised 5 August 2010, Accepted 5 August 2010, Available online 2 December 2010.

DOI: https://doi.org/10.1016/j.artint.2010.11.012