Complexity-based induction

Authors: Darrell Conklin, Ian H. Witten

Abstract

A central problem in inductive logic programming is theory evaluation. Without some sort of preference criterion, any two theories that explain a set of examples are equally acceptable. This paper presents a scheme for evaluating alternative inductive theories based on an objective preference criterion. It strives to extract maximal redundancy from examples, transforming structure into randomness. A major strength of the method is its application to learning problems where negative examples of concepts are scarce or unavailable. A new measure called model complexity is introduced, and its use is illustrated and compared with a proof complexity measure on relational learning tasks. The complementarity of model and proof complexity parallels that of model-theoretic and proof-theoretic semantics. Model complexity, where applicable, seems to be an appropriate measure for evaluating inductive logic theories.
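The preference criterion the abstract describes follows the minimum description length principle: a theory is preferred when encoding the theory plus the examples given the theory is shorter than encoding the examples directly. The following is a minimal sketch of that idea, not the paper's actual method; the two candidate "theories" (a literal encoding and a hypothetical repetition model) and their bit-cost formulas are illustrative assumptions.

```python
import math

def literal_length(data: str) -> float:
    # Theory 1 (no structure): encode each binary symbol literally, 1 bit each.
    return float(len(data))

def repeat_length(data: str) -> float:
    # Theory 2 (hypothetical repetition model): cost = bits for the repeating
    # unit (the "theory") plus bits for the repeat count (examples given theory).
    for unit_len in range(1, len(data) + 1):
        if len(data) % unit_len == 0:
            unit = data[:unit_len]
            if unit * (len(data) // unit_len) == data:
                return unit_len + math.log2(len(data) // unit_len + 1)
    return float("inf")

def preferred(data: str) -> str:
    # MDL-style preference: pick the theory with the smaller total
    # description length; structure in the data is turned into a shorter code.
    if repeat_length(data) < literal_length(data):
        return "repetition"
    return "literal"
```

On a highly regular string such as `"01" * 16`, the repetition theory wins because its total code length (unit plus count) is far below 32 literal bits; on an irregular string no short repeating unit exists, so the literal encoding is preferred.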

Keywords: Inductive logic programming, data compression, minimum description length principle, model complexity, learning from positive-only examples, theory preference criterion

DOI: https://doi.org/10.1007/BF00993307