Shifting Vocabulary Bias in Speedup Learning

Author: Devika Subramanian

Abstract

In this paper, we describe a domain-independent principle for justified shifts of vocabulary bias in speedup learning. This principle advocates minimizing wasted computational effort. It both explains and generates a special class of granularity shifts. We describe its automation for definite as well as stratified Horn theories, and present an implementation for a general class of reachability computations.
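As a toy illustration of the kind of granularity shift the abstract describes, consider graph reachability: merging concrete nodes into coarser blocks yields a smaller abstract graph on which reachability queries are cheaper, at the cost of coarser answers. The graph, partition, and function names below are illustrative assumptions, not constructs from the paper.

```python
from collections import deque

def reachable(graph, start):
    """Return the set of nodes reachable from `start` via BFS."""
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def coarsen(graph, partition):
    """Granularity shift: replace each node by its block under `partition`
    (a dict node -> block label), producing a smaller abstract graph.
    Reachability on the abstract graph soundly over-approximates the
    concrete one: if t is reachable from s, then partition[t] is
    reachable from partition[s]."""
    abstract = {}
    for node, succs in graph.items():
        block = partition[node]
        abstract.setdefault(block, set()).update(
            partition[s] for s in succs if partition[s] != block)
    return abstract

# Hypothetical concrete graph with two natural clusters.
g = {1: [2], 2: [3], 3: [1, 4], 4: [5], 5: [6], 6: []}
partition = {1: 'A', 2: 'A', 3: 'A', 4: 'B', 5: 'B', 6: 'B'}

abstract_g = coarsen(g, partition)      # -> {'A': {'B'}, 'B': set()}
print(reachable(abstract_g, 'A'))       # coarse query over 2 blocks, not 6 nodes
```

The abstract graph answers block-level reachability questions with far less search effort, which is the sense in which a well-chosen vocabulary shift avoids wasted computation.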

Keywords: vocabulary bias, speedup learning, representation shifts, abstractions


DOI: https://doi.org/10.1023/A:1022642320072