Relational preference rules for control

Authors:

Highlights:

Abstract

Value functions are defined over a fixed set of outcomes. In work on preference handling in AI, these outcomes are usually a set of assignments over a fixed set of state variables. If the set of variables changes, a new value function must be elicited. Given that in most applications the state variables are properties (attributes) of objects in the world, this implies that the introduction of new objects requires re-elicitation of preferences. Often, however, the user has in mind preferential information that is much more generic and that is relevant to a given type of domain regardless of the precise number of objects of each kind and their properties. Capturing such information requires relational models. Following in the footsteps of work on probabilistic relational models (PRMs), we propose a rule-based, relational preference language. This language extends regular rule-based languages and leads to a much more flexible approach for specifying control rules for autonomous systems. It also extends standard generalized-additive value functions to handle a dynamic universe of objects. Given any specific set of objects, this specification induces a generalized-additive value function over assignments to the controllable attributes associated with these objects. We then describe a prototype decision support system for command and control centers that we developed to illustrate and study the use of these rules.
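To make the grounding step concrete, here is a minimal sketch of how relational preference rules could induce a generalized-additive value function over a concrete set of objects. All names (Rule, induced_value, the Task/Unit example) are hypothetical illustrations, not the paper's actual language or syntax: each rule ranges over object classes, matches tuples of objects via a condition on their attributes, and contributes a local value term; the induced value is the sum of the terms of all matched instantiations.

```python
from itertools import product

class Rule:
    """A relational preference rule (hypothetical encoding): it ranges over
    a tuple of object classes, fires when its condition holds for a tuple
    of objects, and then contributes a local value term."""
    def __init__(self, classes, condition, value):
        self.classes = classes        # tuple of class names the rule ranges over
        self.condition = condition    # predicate over a tuple of objects
        self.value = value            # local value term for a matched tuple

def induced_value(objects, rules):
    """Ground the relational rules over a concrete set of objects and return
    the induced generalized-additive value of the current attribute
    assignment: V = sum of the local terms of all matched instantiations."""
    by_class = {}
    for obj in objects:
        by_class.setdefault(obj["class"], []).append(obj)
    total = 0.0
    for rule in rules:
        pools = [by_class.get(c, []) for c in rule.classes]
        for combo in product(*pools):      # all instantiations of the rule
            if rule.condition(*combo):
                total += rule.value(*combo)
    return total

# Example in a command-and-control flavor: reward assigning high-priority
# tasks, penalize assigning a task to a unit that is already busy.
rules = [
    Rule(("Task",),
         lambda t: t["priority"] == "high" and t["assigned_unit"] is not None,
         lambda t: 10.0),
    Rule(("Task", "Unit"),
         lambda t, u: t["assigned_unit"] == u["name"] and u["busy"],
         lambda t, u: -3.0),
]

objects = [
    {"class": "Task", "priority": "high", "assigned_unit": "u1"},
    {"class": "Task", "priority": "low",  "assigned_unit": None},
    {"class": "Unit", "name": "u1", "busy": True},
]

print(induced_value(objects, rules))  # 10.0 - 3.0 = 7.0
```

Because the rules are stated over classes rather than individual variables, adding or removing objects changes only the grounding step, not the preference specification itself; this is the flexibility the abstract attributes to the relational approach.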

Keywords: Preference models, Preference rules, Relational models, Rule-based systems, Command and control automation

Article history: Received 28 February 2009, Revised 4 July 2010, Accepted 4 July 2010, Available online 1 December 2010.

DOI: https://doi.org/10.1016/j.artint.2010.11.010